Time Tracking in Scrum [closed] - project-management

Note: Before asking this question I did an exhaustive search, and found little bits of the answer in various other questions, for example:
- What is the best resource for learning Scrum?
- Scrum Process Management - tips, pitfalls, ideas
- Two questions regarding Scrum
However, I feel like this question hasn't been directly addressed (if it has, please let me know).
Do you track time in Scrum as a function of hours/days spent on a task, or simply whether that task is complete or not? Can you adjust those tasks and estimates?
Background: Our new VP of development came from a Scrum environment, and so we're all learning about the process, but one of the things he has brought with him is the concept of very carefully quoting estimates of actual hours each task should require to complete, with the intention of getting more accurate with our estimates over time: thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks.
But it was my understanding that agile practices, specifically Scrum, were based upon the concept of tasks being buckets that store individual deliverable goals, and you add/remove/adjust them as the clients' needs evolve after each sprint.
I realize this could potentially be argumentative, but I assume that, viewing Scrum as a process, only one of those concepts is the "correct" philosophy for that system.

Do you track time in Scrum as a function of hours/days spent on a task, or simply whether that task is complete or not?
I track the estimated remaining work. This is must-have information. Without it, you can't draw the Burndown Chart. Without the Burndown Chart, you don't know "where" you are, and you don't know whether your Sprint is still on track, which would make this decision tool pretty useless. Yes, the Burndown Chart is not a tracking tool, it's a decision tool.
Can you adjust those tasks and estimates?
Sure!
Actually, the team owns the estimates, nobody else, and it is the job of the ScrumMaster to guarantee that this principle is applied. This should already answer the question. But there are other reasons.
As I said, a Sprint Backlog and a Burndown Chart are decision tools and should thus be representative of where you really are. If you hide the reality, if you are not transparent, these tools won't help you make any valuable decision; they will be useless. Think about it: what's the point of having good-looking numbers if they are useless? What's the point of having a "nice looking burndown" if it doesn't reflect reality?
So, during a Sprint, team members should obviously update the estimates of the remaining work as soon as they can (upward or downward). If a task was initially estimated at 6h but the team discovers that more work has to be done and that it will actually take 8h, the team should update the Sprint Backlog accordingly. If someone spent 4 hours on a task that was initially estimated at 4h but still needs 2h of work, these 2h should be reported on the Sprint Backlog. If the team discovers a task that has to be done but that wasn't identified, the team must add this task and its estimate to the Sprint Backlog. Not being accurate at the start is not a problem, as long as you update the backlog with the knowledge gathered over time. The sooner you make these updates, the sooner you'll be able to adapt and make decisions.
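To make those mechanics concrete, here is a minimal sketch (not part of the original answer; the task names and hours are invented) of a sprint backlog where only remaining work is tracked, with each day's total giving one point on the burndown:

    # Minimal sprint-backlog sketch: track only *remaining* work per task.
    # Task names and hours below are invented for illustration.
    backlog = {
        "implement login form": 6,
        "validate password rules": 4,
        "write acceptance tests": 5,
    }

    def update_remaining(backlog, task, remaining_hours):
        """Replace the estimate with the latest remaining-work figure (up or down)."""
        backlog[task] = remaining_hours

    def add_task(backlog, task, estimate_hours):
        """Newly discovered work goes straight into the Sprint Backlog."""
        backlog[task] = estimate_hours

    def burndown_point(backlog):
        """One data point for the Burndown Chart: total remaining work today."""
        return sum(backlog.values())

    print(burndown_point(backlog))                            # day 1: 15h remaining
    update_remaining(backlog, "implement login form", 8)      # turned out bigger than thought
    update_remaining(backlog, "validate password rules", 2)   # 4h spent, 2h still left
    add_task(backlog, "handle locked-out accounts", 3)        # work that wasn't identified before
    print(burndown_point(backlog))                            # day 2: 18h remaining - the line goes up, and that's fine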
That said, it can be useful to keep the "initial estimate" and to compare it to the "actual time spent to complete" - but not for tracking purposes, only to help the team make better estimates. Actually, I would advise against doing this if you are transitioning to Scrum. There are often many other impediments to solve, many other things to improve first while you are learning Scrum values and principles. And if you do it, beware of the Waterfall daemons. Be ready to fight them, they may come back very fast.

The answers I see here aren't wrong, but I don't think they've really addressed your question.
I think you're asking, "Should I track the total hours actually spent on a certain task?" The answer is, "You can if you need to, but it isn't part of Scrum."
Scrum is a very lightweight process. It defines/requires only what is needed to make Scrum work. You can (and, in many cases, probably should) overlay other processes on top of Scrum in order to suit your organizational needs. For example, if tracking the total hours actually spent on a task enables you to better estimate similar tasks in the future (as it seems your VP wants), then that might be a good reason to track total hours, provided that it doesn't interfere with productive work too much. Or, perhaps you need to know the total hours for billing purposes. So just because Scrum doesn't require something doesn't mean you shouldn't do it.
However, for the purposes of Scrum itself, there is no need to track the total hours actually spent on a task. It is not needed for any of the Scrum artifacts, which only track the estimated amount of time remaining.

I don't know if our implementation is "correct", but what we do is:
Backlog items are added, and we put an estimated complexity number on each (in relation to the other backlog items).
Before each sprint, we go through the backlog items in priority order (prioritized by the product owner), break them down into tasks for which we make a time estimate (in hours).
When the number of available hours in the sprint is used up, the sprint is full.
Then, during the sprint after each day of work we adjust the times on the tasks that we have been working on, so that they show the number of hours that we think is left before the task is done. This means that if I have a 6 hour task, work on it for a full day (we consider 6 hours a full day) and then feel that I still have 2 hours left before it's done, then I take down the "hours left" from 6 to 2. In case the task is time-boxed we need to check actual hours used instead, of course.

I have to add something here, because this:
"...one of the things he has brought with him is the concept of very carefully quoting estimates of actual hours each task should require to complete, with the intention of getting more accurate with our estimates over time: thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks."
is just plain not Scrum, so I don't know where your VP got his info. Tasks (known as Sprint Backlog Items) are not created until planning for the next sprint. They are created just in time, and certainly not before the project starts. Before the project starts (Sprint 0), the Product Owner creates the Product Backlog and fills it with stories. He can add to it at ANY time during the project. It is his to manage. The team estimates these stories roughly against one another in story points or some other relative measure (ideal days?).
Estimating tasks in hours is only a tool the team uses to figure out how many stories to commit to in the sprint and then to plot progress to predict success (burndown). Once a team has gelled and has a historical velocity, it may decide not to do any tracking in hours at all and just track its burndown in story points or # of stories. Estimating in hours is a form of waste in itself if the team does not need it to achieve commitment to the sprint goals.
I would ask the VP what these "very careful" estimates are going to accomplish.

Estimate time, but don't really care if it's spot on
Just make sure you are careful and estimate tasks thoroughly. Basically you don't really measure time, because that's more error-prone. The best way is to use tasks' time estimates as story points. This way you will gain:
1. If your time estimates are off, research shows that they tend to be consistently off (the accuracy factor doesn't change too much), so time estimates can easily be used to calculate story points.
2. If you empirically managed to do x story points in the previous sprint, you'll probably achieve similar results this time round, even though your time estimates are incorrect.
3. You will have to be rather good at estimating all story tasks. Otherwise your sprint's story points tend to grow during execution and you won't meet your deadline - even though your velocity remains practically the same.
4. Estimates can change, but similar to #3, keep some sprint slack time for these changes so you can meet sprint deadlines (demo day).
But keep the time estimates, so you can actually see which tasks should be split or joined.

We track both the time spent working on the tasks and the time remaining to complete them. The remaining time allows us to determine the progress made during the Sprint, and to anticipate whether we will be able to achieve the Sprint goal. We update the remaining time for the tasks, adjusting it (sometimes increasing it) on a daily basis.
The time spent is - supposedly - for micromanagement. It also gives the team a chance to get some feedback on the accuracy of the estimates - and to get better at estimating - and to show how interruptions prevent the team from working on the Sprint Backlog and therefore slow it down.
In the Scrum process, individual deliverable goals are called Backlog Items, and can be seen as buckets of tasks. The Backlog Items are prioritized by the Product Owner and estimated by the Team, first as a whole and then task by task. The content, scope, priority and estimate of a Backlog Item can be revised.
We estimate both the Backlog Items and the tasks in time units (days or weeks for the Backlog Items, hours for the tasks) and we apply a focus factor (ratio of time dedicated to work solely on the Sprint tasks) to account for time not spent working on tasks to achieve the Sprint goal.
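As a rough illustration of how such a focus factor feeds into Sprint Planning (the numbers below are hypothetical, not from this answer): capacity is simply people x days x hours x focus factor, and that is the ceiling the summed task estimates are checked against.

    # Hypothetical capacity calculation using a focus factor.
    team_size = 4
    sprint_working_days = 10
    hours_per_day = 8
    focus_factor = 0.7    # fraction of time actually spent on Sprint tasks

    capacity_hours = team_size * sprint_working_days * hours_per_day * focus_factor
    print(capacity_hours)  # 224.0 hours available for Sprint tasks

    # At Sprint Planning, stop pulling in Backlog Items once the summed
    # task estimates reach this figure.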

With respect to time tracking, what you're looking for is a burndown chart.
Fredrik explained what a burn down is, without using the term. Essentially, you regularly reestimate the time remaining for a particular activity.
So to your question of whether or not we track time spent, not necessarily. Scrum likes to work with time remaining instead. (You could substitute hours with story points, the principle is the same, as Robert explained.)
To your second question of whether you can adjust your tasks and estimates: most definitely yes. Agile follows a 'responding to change' philosophy; you prioritize what's most important to the customer.
However, some teams prefer not to add/delete/re-prioritize tasks in a particular sprint once it has begun, since that is almost an ad-hoc way of working, and even Scrum requires some structure and discipline.
The statement "thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks." is almost certainly not in the spirit of agile.

We use the Pomodoro Technique to track the time remaining. One of its advantages is that the amount of time spent is recorded in a disciplined way.
After estimating stories in story points, we estimate tasks in terms of pomodori, and use this estimate (which may be reestimated ad hoc) to judge the amount of time remaining. At the end of the sprint it's easy to see which tasks we originally estimated the least accurately and improve how we estimate in the future, due to the way we mark the number of pomodori estimated and completed on each post-it.
In terms of the sprint, the estimated hours remaining are just a measure of progress so we can see where we are burndown-wise. They're a clue to whether we're on track or not. The score that matters is story points completed.

By definition, an item is done when all of the tasks that need to be completed in order to fully implement that item have 0 hours left. What you need to track inside the sprint is remaining hours on remaining tasks. Not hours spent on a task. Why? Because our knowledge of how long something will take is imperfect and we gain little by trying to come up with a super-accurate estimate when we should be working on the product.
You are always allowed to add tasks under a sprint backlog item as you identify more work that must be done to fully implement the item, and you should update the remaining hours to completion daily (or set them to 0 once you've completed the task).
You should tell your VP that knowing when you're going to ship the product based upon your most accurate information (today) is far better than setting a number/making an estimate in the past and never updating it. This doesn't mean re-estimating user stories (don't do that until the end of the release), it means updating the sprint backlog with new tasks, and the best estimate as to when active tasks will be complete in remaining hours.
BTW, the way to work on accurate estimates is to plan your release using story points, create an iteration plan based upon your estimated team velocity, and then to continually update the iteration plan based upon the output at the end of each sprint. After a very few sprints you will get a very accurate idea of the actual team velocity, making it easy to forecast when you will ship your release with the desired scope... or what scope should be completed by the original ship date. Using actual project data from your current project to predict project completion is a software engineering best practice, because it is the most accurate way to make a prediction.
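A back-of-the-envelope sketch of that planning loop (the figures are invented): after each sprint you fold the observed velocity back into the plan and re-forecast the remaining sprints.

    import math

    # Hypothetical release forecast updated from actual velocity each sprint.
    remaining_story_points = 120
    completed_per_sprint = [18, 22, 20]   # observed velocity so far

    avg_velocity = sum(completed_per_sprint) / len(completed_per_sprint)   # 20.0
    sprints_left = math.ceil(remaining_story_points / avg_velocity)        # 6
    print(f"~{sprints_left} sprints left at ~{avg_velocity:.0f} points/sprint")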

Related

How to manage agile development when the team is not stable? [closed]

I have been using agile approaches (XP and Scrum) for my projects for several years with great results. But in all cases, all members of the dev team were committed 100% to the project.
Now I am faced with doing this when the team is not stable. For instance, one iteration there may be four people working, the next maybe only two or three.
I realize this makes it hard (or impossible) to estimate using the normal velocity approach, since velocity will fluctuate too much to be stable. It follows that one cannot really expect to be able to release at the end of each iteration.
Maybe another approach is needed here. Just grab stuff from the backlog and just muddle through and release whenever it is possible. I really don't like that though...
Any thoughts?
From the question I assume you have some developers (probably 2) committed 100% to the project and some (another 2-3) who only participate at times.
One thing you can do is set up a different process for the core developers who are 100% committed and for everyone else. Use your normal agile process for the core people and release their work at the normal iteration cycle. For the non-core people, do little planning and assume their (and your) estimates will be way off at times. Ideally their changes should be isolated and merged into the stable branch of code by core members, but not every project's architecture and team roles allow this.
The point is to separate and isolate source of chaos and leave the heart of a project and team unaffected.
Maybe instead of agile approaches, you can slow things down with other iterative and incremental approaches. Instead of having iterations measured in weeks, having longer iterations (perhaps measured in months) would be better if you keep adding and dropping people from the team.
This doesn't mean that you still can't use some Agile techniques. I would still maintain your Backlogs and burn down charts, with the realization that instead of having a release every 2 weeks, you'll release every 6 weeks (~2 months). If you have new developers joining more experienced developers, use pair programming, assign the new developers to bug fixes, or assign the new developers to maintaining unit tests to help them learn the code base.
Velocity is only an estimation.
Naively, if you have a given velocity v with a team of 4 developers, then schedule your iteration with a velocity of (v/4)*number_of_developers
You can fudge this value if the members you are losing are particularly stronger or weaker than the average.
This is basically what PivotalTracker does with its team strength metric.
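A tiny sketch of that naive proration (illustrative numbers only): treat historical velocity as per-developer throughput and scale it by whoever is actually available next iteration.

    # Naive velocity proration by team size, as described above (invented numbers).
    historical_velocity = 40      # points per iteration with the full team
    full_team_size = 4
    available_developers = 3      # next iteration

    planned_velocity = (historical_velocity / full_team_size) * available_developers
    print(planned_velocity)       # 30.0 points to plan against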
So, you have a project with a continually changing team size and your boss wants you to give him an accurate estimate of how long it will take? You can do this, as long as you keep in mind the difference between accurate and precise. Your precision will depend largely on the number of items and how granular (decomposed) each item is; the more items you have the more the Law of Large Numbers works for you, averaging out over- and underestimates.
Your accuracy is a function of confidence. Note that estimates aren't single-point values; they're a range of numbers with an associated confidence. For instance, a proper estimate wouldn't be "2 weeks", it would be "50% confidence of 2 weeks, 80% confidence of 4 weeks."
If I were the person assigned with the unenviable task of providing an estimate to completion for a project being managed as arbitrarily as in the original post, I'd try to figure out a range based upon the minimum number of folks assigned (e.g., "48 to 66 weeks given 2 developers [50% to 80% confident]"), and a range associated with the average number of folks assigned (e.g., "25 to 45 weeks with 5 developers [50% to 80% confident]"), and use the low figure from the average number along with the high figure from the minimum number (e.g., "25 to 66 weeks given anywhere from 2 to 5 developers [50% to 80% confident]"), and even then I'd put a disclaimer on it ("plus 10% for the lost time due to context switching").
Better yet, I'd explain exactly why this arrangement was, to be polite, sub-optimal, and why multi-tasking is a primary signpost on the road to project Hell.
As someone else suggested, changing the workflow from iteration-based to flow-based (Kanban) might well be a good strategy. With Kanban you handle changing project priorities by changing the priority of items in the backlog; once an item has been grabbed by the team it is generally finished (flows all of the way through the workflow, stakeholders aren't allowed to disrupt the team by screwing around with work-in-progress). I've used Kanban for sustained engineering projects and it worked very well.
Re how it would help with estimates, the key to continuous flow is to try to have each work item be roughly the same size (1x, 2x, 3x, not 10x, 20x, 100x). You should track movement of items through the workflow by tracking dates of process state changes, e.g., Queue 1/15, Design 1/22, Dev 1/24, Test 2/4, Integrate 2/7, etc., and then generating a cumulative flow diagram regularly to evaluate the time-in-state durations over time. Working out how long the project should take, given that you know the size of each item and the time through the workflow for items, is a trivial computational exercise left to the reader.
(The more interesting question is how to spot constraints... and then how to remove them. Hint: look for long times in states, because work piles up in front of constraints.)
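A minimal sketch of that bookkeeping (the item, states and dates below are invented): record the date each item enters each state, and the time-in-state figures and total cycle time fall out directly - this is the raw data behind a cumulative flow diagram.

    from datetime import date

    # Invented work item with the date it entered each workflow state.
    items = {
        "item-1": {"Queue": date(2024, 1, 15), "Design": date(2024, 1, 22),
                   "Dev": date(2024, 1, 24), "Test": date(2024, 2, 4),
                   "Done": date(2024, 2, 7)},
    }

    for name, states in items.items():
        ordered = sorted(states.items(), key=lambda kv: kv[1])
        for (state, start), (_, end) in zip(ordered, ordered[1:]):
            print(f"{name}: {state} took {(end - start).days} days")
        cycle_time = (ordered[-1][1] - ordered[0][1]).days
        print(f"{name}: total cycle time {cycle_time} days")
        # Consistently long times in one state across many items point at a constraint.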
Let the individual developer that will be working on the story estimate the effort required to complete the story. You can take into account historical variances in that developer's estimations, but the idea is that you can take their estimates and then figure out how many stories you'll be able to finish in that sprint.
Don't forget that average velocity is largely used for lookahead release planning; the team is responsible for selecting in each iteration how many backlog items to take on (although knowing historical velocity can assist them).
If your team size (and hence velocity) is fluctuating from iteration to iteration, you can still do useful release planning by using average velocities over the past N sprints, assuming the team fluctuations will continue and hence their long-term average velocity will actually be stable.
Your main problem here is that the team will find it hard to give predictable estimates and deliveries since the team is changing from sprint to sprint. This can also hurt the team commitment and continuous improvement.
This case might actually be well suited to a Kanban approach. Check out Henrik Kniberg's introduction to Kanban for a quick overview.
Good luck!

Story estimates in Scrum [closed]

We started a project that will be managed with Scrum/XP. We wrote the whole product backlog upfront for evaluation purposes. We're making sure all stories are customer-centric and we're evaluating them by
story business value: MoSCoW technique - Must, should, could, would/won't have this implemented
story effort/complexity (= story points): 1, 2, 3, 5, 8, 13, 21, 100 - related to story complexity/effort rather than ideal days duration
Some 100-point stories may be marked Would/Won't have because they are actually bigger, more complex stories that will be broken down later if needed.
Calculated story importance is based on value & effort, without overlapping the MoSCoW categories.
But leaving out the 100-point stories, our stories so far (already broken down) have a complexity between 2 and 8, which we think is an appropriate story size to avoid micromanagement. However, some stories have become related to or dependent on each other. We have stories that may take more effort if done first, and less if some other story is done before them.
Questions
Is it possible to adjust story points later on during development, as we can do with story tasks (re-evaluating them, adding new ones, removing existing ones), or is this not the case with stories? Changing their complexity will also change end-date estimates based on the planned velocity. What's the best practice in this case?
You absolutely can estimate your stories again and you should. The points are only locked when the team commits to them at the Sprint Planning Session immediately prior to the start of a Sprint.
One practice I've used is to evaluate each story again when doing the individual Sprint Planning. The team learns over time and will become more accurate with estimates and at identifying dependencies. Remember that what goes into a Sprint is up to the team; the Product Owner defines the overall backlog. If the project is time-bound, don't try to make the estimates fit the end date - if you do, you are setting yourself up for failure.
Remember that with velocity you start with a guess at what you can accomplish. It usually isn't until the 3rd or 4th Sprint that you identify a realistic velocity the team can manage. Yes, this does mean that you may have assumed the team could deliver 20 points per Sprint when it can actually only do 15. Yes, that means delivery dates slip or stories fall below the cut line.
As for dependent stories, you should work with your Product Owner. If the team talks to them you can usually rearrange stories. Most people are receptive to someone who tells them "If we do A now it will take the full Sprint, but if we do A later it will take 15% of a Sprint" - that makes it pretty convincing.
A useful practice to try is scheduling the stories within the Sprint. During the planning session, once all stories are validated and discussed, the team pulls up a calendar and discusses when they want to have things done. Putting target dates on a calendar helps identify overlaps and dependencies between the stories. This can identify things that are serial in nature and may cause a Sprint to fail.
Hope this information is helpful.
From your explanation you're doing a great job already. Of course there will always be stories with a dependency. Some may not even have directly visible customer value (e.g., the initial effort to set up an architecture and some frameworks), but if you leave them out you'll create a lot of technical debt. If you can, I'd suggest that you try to make the equation complete and somehow show the relation between the tasks.
For instance:
- task 3 is 8 points if done after task 2, but 12 points if done independently.
This way the product owner will feel the pain of ignoring dependencies, but can still choose to do the most valuable stories first. If the product owner is sure that all of the stories will make it into the next sprints, then you can steer towards having them implemented in the most efficient order, for instance by blocking items whose dependencies have not been fulfilled (e.g., you can only have the 'change my logo on website' feature after the 'web-enabled version' story is completed).
Good luck!
I can only describe my experience.
When we were planning our first sprint we decided that we could accomplish 18 points, so we took several stories whose total estimate was 15 points. As I mentioned above, we were taking our first steps in Scrum, and that's why we decided that 3 unused points and a form-factor of 0.6 guaranteed our success.
But our estimates for each story were only approximate. We also had some dependent stories. And we didn't make an implementation plan for each story, because we thought that was unnecessary with an agile methodology.
As a result we failed our first sprint, with only 8 points completed.
Before our second sprint I decided that we should take something from the good old simple waterfall and iterative methodologies (and I was the Scrum Master). So, at our next sprint planning, to make correct estimates we planned each story (about 20 minutes per story), with simple diagrams, all dependencies, implementation details and so on. The planning was difficult and it took 2 meetings.
But the second sprint went much better and we got almost everything done (actually everything, but with some bugs). I think we'll use a lower form-factor in the 3rd sprint and it'll be successful.
There are some patterns that can help you split User Stories in a way that keeps them INVEST (Independent, Negotiable, Valuable, Estimable, Small, Testable) - preserving independence, size, testability and value in particular. You can read more about that here: http://www.richardlawrence.info/2009/10/28/patterns-for-splitting-user-stories/ Richard is actively applying and improving them, and he is not alone ;-)
Just be aware that splitting while keeping dependencies (which is like creating a critical path in a Gantt chart) is going to limit the team's ability to be creative and to negotiate on those stories, and might also hide a "non-valuable proposition".
HTH
AndreaT

What is better: set up underestimated or overestimated deadlines? [closed]

Suppose you are a project manager. You can estimate the effort in days for a specific task for a specific developer. After performing the estimation you obtain some min and max values.
After this you delegate the task to the developer, and you also set a deadline.
Which estimate is better to use when setting the deadline: min or max?
As I see it, the min estimate can result in stress for the developer, while the max estimate can result in using up all the time allocated, even if the task could be completed faster (so-called Student Syndrome).
What other pros and cons do the two approaches have?
EDIT:
Small clarification: I'm talking about setting deadlines for subordinates when delegating a task, NOT about reporting to my boss.
EDIT:
One more clarification: I can keep my real estimate in mind, give my boss a slightly larger estimate, and give subordinates a slightly smaller one.
This question also touches on the following: is it a good idea to give a developer an underestimate to make him work harder?
You should use a best guess, which is a function of the min and max estimates* - not just the simple average:
best_guess = (min * min_weighting + max * max_weighting) / divisor*
* Tom Neyland suggests the divisor should be (min_weighting + max_weighting). Dividing by the sum of the weights normalises the result, so it is more correct than my original divisor of 2.0.
The weighting you give to the min and max values will depend on the complexity of the task, the risks associated with the task, the likelihood of the risks occurring, the skill of the developer, etc., and will vary from organisation to organisation and from project to project. If you keep a record of your previous estimates and the actual time each took, you'll be able to refine these estimates over time.
You should also use these values, plus a confidence value, when talking to senior management and customers. While giving the max and delivering early is not the same as giving the min and delivering late, it still shows that you don't have control over your development.
Giving the confidence value and an idea of the risks will also help manage expectations so if there are problems they're not unexpected.
* These min and max estimates will be obtained by various means - asking the developers, past experience, etc. If polling developers, then the actual minimum and maximum values should be treated as outliers and either discarded or modified in some way. What I mean here are the values you get from phrases like "it'll take 2 weeks if all goes well, or a month if we hit some snags". So the values you plug into the formula are not the raw numbers.
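A small sketch of the formula above with the sum-of-weights divisor (the 1:2 weighting and the example numbers are purely illustrative; choosing the weights is the judgment call described in the answer):

    def best_guess(min_est, max_est, min_weight=1.0, max_weight=2.0):
        """Weighted blend of the min and max estimates; dividing by the sum of
        the weights keeps the result between min_est and max_est."""
        return (min_est * min_weight + max_est * max_weight) / (min_weight + max_weight)

    # "2 weeks if all goes well, a month if we hit snags" -> lean towards the max.
    print(best_guess(10, 20))          # 16.7 working days with the default 1:2 weighting
    print(best_guess(10, 20, 1, 1))    # 15.0 - the simple average, for comparison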
Use neither min nor max but something in between.
Erring on the side of overestimation is better. It has much nicer cost behavior in the long term.
To overcome the stress caused by underestimation, people may take shortcuts that are not beneficial in the long term. For example, taking on extra technical debt that has to be paid back eventually, and it comes back with interest. The costs grow exponentially.
The extra cost from inefficiency due to student's syndrome behaves linearly.
Estimates and targets are different. You (or your managers and customers) set the targets you need to achieve. Estimates tell you how likely you are to meet those targets. Deadline is one sort of target. The deadline you choose depends on what kind of confidence level (risk of not meeting the deadline) you are willing to accept. P50 (0.5 probability of meeting the deadline) is commonplace. Sometimes you may want to schedule with P80 or some other confidence level. Note that the probability curve is a long-tailed one and the more confidence you want, the longer you will need to allocate time for the project.
Overall, I wouldn't spend too much time tracking individual tasks. With P50 targets half of them will be late in any case. What matters most is how the aggregate behaves. When composing individual task estimates into an aggregate, neither min nor max is sensible. It's extremely unlikely that all tasks complete in either the minimum time (most likely something like the P10 time) or the maximum time (e.g., the P90 time): for n P10/P90 tasks the probability is 0.1^n.
PERT has some techniques for coming up with reasonable task duration probability distributions and aggregating them to larger wholes. I won't go into the math here. Here's some pointer for further reading:
Steve McConnell: Software Estimation - Demystifying the Black Art. It's quite readable and pragmatic but at least the 1st edition I have has some quirks in its math and otherwise.
Richard D. Stutzke: Estimating Software-Intensive Systems - Projects, Products and Processes. It's a little more academic, harder read but for example explains the math better.
Ask for best, likely and worst case scenario estimates instead. Then use Program Evaluation and Review Technique. However you may want to take a look at some PERT critique first.
For individual tasks or tasks making up the critical path it’s simply not prudent to go for the best case estimates. It’s like saying that the project is absolutely free of any risk and uncertainty. If the actual job turns out to be anything but the best case scenario you’ll end up blowing the schedule. It’s better to end up with some extra time on your hands and fill the time by implementing some nice-to-haves as opposed to having to work nights and weekends.
On the other hand, if managers mostly went for the worst-case estimates - and in the software world these can easily be an order of magnitude greater than the best-case figures - most projects would never make it past the feasibility and planning stage. Not all of the risks are going to materialise.
Going for the best-case estimate won't help fight Student Syndrome either. Include interim milestones and deliverables instead; besides helping combat Student Syndrome, they are a prerequisite for having trustworthy data on project progress and for uncovering any potential issues early.
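For reference, the classic PERT three-point formulas this answer alludes to, in a short sketch; the task figures are invented, and scheduling at expected-plus-one-sigma is just an illustration of trading confidence for schedule, not a rule from the answer:

    def pert(best, likely, worst):
        """Classic PERT three-point estimate: expected value and standard deviation."""
        expected = (best + 4 * likely + worst) / 6
        std_dev = (worst - best) / 6
        return expected, std_dev

    expected, sigma = pert(best=4, likely=8, worst=20)           # invented task, in days
    print(f"expected ~{expected:.1f}d, sigma ~{sigma:.1f}d")     # ~9.3d, ~2.7d
    print(f"more conservative target: ~{expected + sigma:.1f}d") # ~12.0d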
If the difference between min and max is big, then rather than using some black-magic formula, I think the best thing to do is go back to the developers and ask them to do a finer breakdown and some prototyping, which will lead to better estimates where the gap between min and max is not that big.
Note on the question: in my opinion, the estimates should be done by the developers/architects, since they have the best technical knowledge to break the work down into tasks and estimate those tasks.
If you are estimating for a specific developer, and you know your estimates are generally accurate for that developer, then the min value is the logical deadline (initially). In the course of the project you will adjust deadlines according to circumstance.
If you have little experience with a specific developer, one of my fondly regarded previous managers would ask the developer himself to do the estimate and set the initial deadline a third of the distance between that developer's min and max, challenging the developer to beat it.
Something which has been missing in many of these answers (perhaps because it's slightly off-topic) is frequent updates. With younger/newer developers this is even more important - read the code they commit, and/or check in daily to ask them for specific, detailed reports.
This also allows you to set tight deadlines for developers without giving them too much stress, because they will know you're around to help adjust deadlines when needed.
Frequent updates give you the most important tool in setting customer/management expectations - early warning of issues which might delay things, and I prefer having that over any formula.
Is the developer going back into a cave to develop this or is there a good chance of changing requirements over the course of the project? I would think most projects will have a good chance that something won't go smoothly and thus it may be better to try to get the prototype up sooner rather than later.
As for the initial question, I think I'd break this out into a few different outcomes and consider each:
Gross underestimation -> This leads to the problem that there is still a lot of work to do and the manager appears unable to make reasonable estimations.
Minor underestimation -> In this case, either there is an extension, scope gets cut or some bugs are in the release, but this is better than the previous case.
Made the deadline, on time and on budget with quality -> While this may seem optimal as everything worked out, I don't think this is the best result possible.
Minor overestimation -> In this case, there is some breathing room, meaning either things finish early or some extra work gets added. This may actually deliver a slightly better result than the previous case, much like how some companies try to beat their earnings estimate by a small amount so they look better than expected.
Gross overestimation -> I think this would be the worst case outcome though it is similar to the first in terms of someone being way out of their league in being able to provide a reasonable estimation.
That's just my opinion on each and others may have a different take on it than me.
If you're trying to hold developers to their minimum estimate, that's foolish. No one, in any industry, consistently hits their minimum time estimate for getting something done. Eventually, they'll just learn to pad their minimum estimates significantly, and then they'll never hit the old minimums, because all estimates will be above that.
In Agile/Scrum, you don't set firm deadlines, but set "how many hours left on this task". Every day, you update the amount of time left. You do not track hours spent, but do track estimated hours remaining, and you try and stay honest about it.
If you have lazy developers, this is bad, because they can easily game that system. If you have developers that are worth their salt, this is great. They get better at estimation pretty quickly, and you - as a project manager - learn how reliable their estimates are, and you'll have a much better feel for what estimates to pass up the chain based on the individual developer estimates.
Go slightly towards Agile, fire the bad developers as you discover which are which, reward the good developers for actually giving a damn, and have a more productive, happier team while being able to report more accurate expectations to your superiors.
If in doubt, under-promise and over-deliver: you want to be the person who delivers more than was expected, not less. Based on this, always go with the higher of any estimate.
Slightly more complex:
For a given potential delivery, if you plot the delivery times against the chances of them happening, you're going to get a curve which is a variation of a normal distribution, and you can assume that a developer's minimum estimates are going to be somewhere towards the left of the curve and their maximum towards the right.
The area under the curve to the left of the single number you select as your estimate represents the probability of you successfully delivering on or before that estimate. So if you give a number at the very left hand side your chance of hitting is effectively zero, if you give a number at the very right hand side your chance is effectively 100%.
What is less commonly realised is that if you give the mean value (assuming your min and max averaged out give something approximating the actual mean) you'll only hit that deadline 50% of the time. Effectively, if you use the mean you're going to miss the deadline half the time. I don't know about you, but I don't like being seen as the guy who misses half his deadlines.
So you want a number which is going to give you something you hit, say, 90% of the time. Conveniently 95% represents the mean + two standard deviations but if you can't be arsed to calculate that (and most of us probably don't have the data) my experience says that:
(3 x max + 1 x min) / 4
gives a reasonable result.
Incidentally, what you tell the developer is the deadline is another question entirely. Personally I'd give him somewhere around ((2 x max + 1 x min) / 3) and have the rest as contingency.
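Putting this answer's two rules of thumb side by side (the min/max figures are invented): the number you commit to externally sits a little to the right of the deadline you give the developer, and the gap is your contingency.

    def external_commitment(min_est, max_est):
        # What you tell the customer/management: skewed well towards the max.
        return (3 * max_est + 1 * min_est) / 4

    def developer_deadline(min_est, max_est):
        # What you tell the developer: a bit tighter, with the rest held as contingency.
        return (2 * max_est + 1 * min_est) / 3

    min_est, max_est = 10, 22   # invented estimates, in days
    print(external_commitment(min_est, max_est))   # 19.0
    print(developer_deadline(min_est, max_est))    # 18.0 -> one day of contingency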
What are you using the estimates for? Specifically, why will the developer feel stressed if you normally underestimate?
If you're trying to schedule how long something is likely to take, you go for an intermediate value. Probably on the long side, since people normally underestimate. In any case, you shouldn't be using these estimates as firm objectives for developers, and so they shouldn't be overly stressful.
If you're using these estimates to set up commitments, you need to err on the side of overestimating. Giving developers insufficient time leads to burnout, unmaintainable buggy code that doesn't do quite what the user wants, and low morale and high turnover. Set the commitments to be reachable, and encourage the developers to finish early.
This depends on the project.
Some projects require fast development, and there is no alternative if the deadline is already set and there is no real chance of extending it. A typical case: a marketing campaign resulting in a new service. Such a deadline can be enough for normal development, but in some organizations it is so tight that developers work under stress and make many errors, which then get fixed during the production stage. That's the kind of project where developers have to work at their most effective, and they had better get a good reward for success.
Some projects are accurately planned, and here you can use all the analytics you have: historical data, individual developers' time metrics on subtasks, risk calculations, etc.
But in any case the MAX time shouldn't be used: it's the most inaccurate measure and usually leads to even more time being taken. Here's a simple reason: when a developer just throws out that MAX figure, he is hardly measuring at all; he is just giving his intuition, which has very little information behind it at the time. But if he spends at least half an hour, he'll understand the specifics of his task, maybe even split it into subtasks, and improve his accuracy. So you can give the developer some direction, like "hey, just think about how long you would need to deliver stable code here", but send him off to measure it himself. It is good for the job, and it is good for the programmer himself.
The first mistake most estimators make when setting a deadline is assuming that the dev will be working full-time every day on that task, which is a disastrous mistake. It can result in missing the deadline even when you use the overestimate to set it. Being under the hours but past the deadline you told the client is a big problem. People take leave, get sick, have jury duty, have to go to required meetings on some new HR policy, get called over to help on another project when someone is stuck, have to load software on a new computer when their old one breaks, have to research a production problem in code they recently deployed, etc. If you are estimating more than 6 hours a day per person on the project, you are already in trouble on the deadline before the project starts. When I did manpower studies, we used a figure that equated to just slightly more than 6 hours a day of direct work when calculating how many people were needed for any job. And we did a lot of statistical analysis as the basis for the figure we used.
I think you have to decide which of these to use on a case-by-case basis. We have some projects where we know the max estimate is still probably a little low (usually when someone in management couldn't face the client with the real estimate); we have others where we are doing something new and we know the estimates are more likely to be off - in these kinds of cases go with the max. But for work you've done before that is well-defined, and where you know the dev assigned won't be learning new skills, go closer to the min (but never actually use the min; there are always unexpected bumps in the road). Also, the shorter the project, the more likely you are to meet the min; it is far easier to get a good estimate for a week-long project than a year-long one.
More important is changing the estimate and deadline every time the circumstances change. If the client adds work, extend the deadline and the estimate; don't just do it. If your best dev quits and you have to put someone new on the project, extend the deadline, because that person needs time to get up to speed (you may have to eat the hours though; the client may not agree to pay for that time). Critical to this is telling the client right away. They tend to be better about moving a deadline (although not happy) than about missing one, or about making the deadline but the product not working as they expect it to. Too many project managers just like to wish a problem away so they won't have to face that conversation with the client. But usually when they do finally have to tell the client, it is a much worse conversation than the difficult one they tried to avoid.

Need tips on how to prioritize and schedule a bunch of work items [closed]

It took me some time, but I've finally managed to write down all the tasks that need to go into Version 1.0 of the software product I'm working on.
The list is almost 1000 items long.
We are a 3-person team, and we've somehow managed to get this far using MindMeister, Google Docs, #todos in the code etc. Now, I have everything neatly grouped by feature, but how do I prioritize all this and turn it into a schedule?
Any advice would be greatly appreciated - I'm not looking for software recommendations, however - I'm seeking advice on how to take this enormous bag of tasks - ranging from bug-fixes to application modules - and find out in what order I should do them.
Prioritize ruthlessly. 1000 action items is a lot, and the odds are that as you go you'll modify some, toss others, and add new ones. Your list will not survive the things you learn by actually building the software, and if you don't do the most important stuff first, you'll end up with a mess.
For every item or feature, you have to answer the question: Can the product be at all usable or useful without this? If yes, it can wait; everything else goes to the head of the queue.
After that, I like to group milestones by focus: I'll do a features milestone (or multiple ones if there are natural small clusters of features), a UI milestone where I'll focus on AJAX/rich client interactivity, a performance milestone where I profile and do database & server tuning, etc. Or break them up some other way - but definitely break them up. Work in smaller bites with specific focus for each iteration, and make sure each iteration is solid before moving on.
My recommended approach will be based on Agile methodology best practices...
So, you have what in Agile terms is called a "backlog" defined - that's great, and an important first step.
A good Agile pace that is commonly used is a 2-3 week iteration length... and at the end you have a set of releasable features. This will establish the "heartbeat" of your development process. Next, you'll decide how to organize and group the features into Stories and Tasks.
You'll want to grow the underlying architecture and let it naturally emerge based on the ordering of the Stories and Tasks that you select from your backlog.
It's important to mitigate risks early - so you'll want to select early those items that are either performance or implementation unknowns, since they pose the largest risk and could result in the largest rework impact. For example, establishing the messaging infrastructure might be an early architectural feature to include if you select a Story that requires a persistent message to be delivered to complete a unit of work.
Can you group the set of features into functional categories that might naturally evolve to describe the 1.0 release as a System of Systems? For example, the Administrative functions, the User Profile Management, Reporting, external integration layers, Database Access Objects, etc.
What are the simplest Stories / Use Cases that you can write that map to some of the ~1,000 features/requirements you've defined? Select a set of Stories (or individual Tasks from a Story, if the Story itself is too large to implement in a single iteration). It will take some additional effort, but recomposing your requirements into a set of Stories/Tasks is important.
You'll find that you will refactor during subsequent iterations, but your steady 2-week heartbeat iteration schedule will keep delivering real functionality.
At various points you may want to schedule an architecture iteration just to focus on some cleaning-up / refactoring - and that's ok too.
Since you are indicating that all these items are required, I will assume that there is not much chance of dropping items off the list (at least for now). Given that, you have 2 large tasks at hand - deciding when to do items, and determining how long it will take to do them.
Since you have already conveniently grouped the items by feature, I would start by prioritizing the features. Hopefully this will significantly reduce your working set, and allow you to actually get through it in a reasonable amount of time.
I would prioritize each feature based on its risk. Some things are easy to implement and others are difficult. Since they are all required, do the riskiest features first, when your schedule is more flexible to meet any unanticipated problems. Wait until the end of your cycle, and Murphy's law will strike you down.
Given your small team, I would just send the list of features around and ask everyone to mark it if they consider it a risky or difficult feature to implement. Add up all the marks and you have your "risk assessment", with the highest scoring items getting assigned first.
Alternatively, if you have easy access to your customer, ask them to rate the "risk" associated with each feature (in this case risk refers to the worst-case scenario of not having the feature - if not having something would be annoying, it is not risky. If not having the feature would result in them not using your product, it is high-risk).
Now that you have a priority queue, it is time to estimate. For the initial estimates, I would simply do an order of magnitude estimate for each of the features. Since it sounds as if you have already broken the features up, you should be able to get a decent feel for whether something is going to take hours, days or weeks. From the sounds of it, you are still early in development, so I don't believe there is much point in trying to get an accurate estimate on something that won't be implemented for another month or so.
As you pull items off your queue, have your team provide more accurate estimates by identifying granular tasks that shouldn't take more than a few hours. If you want to refine your order of magnitude estimates, you can progressively provide quick estimates for the remaining tasks based on your up-to-date knowledge of the system.
This should provide you with a fairly accurate short term schedule, and a fuzzier long term schedule that will progressively get more accurate.
Finally, if you are facing a long development cycle, I would recommend you identify certain target goals or dates, and when you meet those goals, sit down and repeat this whole process. I would never go longer than 2 weeks without revisiting these things. New items will get added, others will get overtaken and become obsolete, and others will become higher risk as you better understand the problem. All of this must be taken into account.

Agile - Task Breakdowns - to estimate or not? [closed]

During our iteration planning, we frequently find ourselves in the same position as this guy - How to estimate a programming task if you have no experience in it
I definitely agree with prototyping before you can give a reasonable estimate. But the same applies to anything that needs a bit of architecture and design, and I'm not that comfortable doing all this outside the scope of a sprint.
The basic idea is that you identify as many tasks as you can that you're confident of, and estimate these as normal. For those areas that you're unsure of then there should be two 'types' of task identified: Investigation & Implementation.
Investigation tasks are brief descriptions of work that you're just unsure of, for example "Investigate how to bind Control X to data". An estimate is provided for these.
The Implementation task is a traditional rough guess, probably based on the story points assigned, of how long you think it would take to implement the feature.
During the sprint, when the investigation tasks have been completed, the developer should then be at a stage where they have a much better idea what is going on. 'Proper' Tasks can then be identified, which take the place of the Implementation placeholder. In addition, further Investigation tasks may be identified at this stage, and the cycle continues.
In the above example, we start with an Investigation task at 7 hours and an Implementation task estimated at 14. Once the first Investigation has been completed, Tasks 1, 2 and 3 will be identified and estimated with some degree of certainty, where Task 3 is another Investigation task from which Tasks 4 and 5 will be identified at a later stage. As you can see, the first Implementation estimate had delivery of the feature within 14 hours - but in reality it took at least 4 + 7 + 3 + 4 + 2 = 20, over 40% more than the initial estimate.
(Diagram of the task breakdown: http://www.duncangunn.me.uk/myweb/images/estimate.png)
All thoughts are welcome - my gut instinct is this will fly - am I right or am I the Wrong Brothers?
Cheers!
What we do.
Some features involve new technology. We can't accurately estimate them. Period.
We make up a number. Based on a couple of things. How hard does it "feel"? Can we get by with some kind of "partial" or "just-enough" implementation?
If it's hard, then it's hard. It will be expensive.
If there's a lot of parts, with a kernel of goodness and some bonus stuff layered on, we have a possibility of putting just the kernel into a release, and setting other stuff aside for later. A very few things are "all or nothing" where a partial release isn't possible. In that case, we have to provide enough time for "all", and that gets expensive.
Our standard approach is to get stuff that works, and possibly defer things to a later sprint if we ran out of time because of unexpected complexities.
What you're calling "investigation", we call technical spike sprints. For stuff that's new, we make up an estimate number to placate managers who feel it necessary to overplan things. Then we spike the technology. Once it's spiked, we can revise the estimates based on what we now know.
Actually, the implementation of the feature took 27 hours - you forgot the first investigation of 7 hours, so in reality the actual implementation took almost twice as long as the estimate.
There are two ways you can go on this:
Just make the estimate as best you can and potentially experience a blowout in your sprint and a drop in project velocity (you should only do this if the feature is both urgent and critical); or
Schedule the investigation for this sprint and leave the implementation for another sprint - without an idea of how long the task will take, the Product Owner does not have enough information to make a decision about in which sprint to schedule it or even whether to do it at all. Only tasks that have been estimated should be included in your sprint.
The first choice means your sprint and project estimates are somewhat arbitrary. The second choice gives much more predictability to your sprints.
In your example, the initial investigation may be scheduled for Sprint 1 but without knowledge of how long the task will take the Product Owner can't decide how to schedule it. If you came back with an estimate of 200 hours the Product Owner may decide not to do that feature at all, or to delay it until Release 2 of the product. The estimate comes in and the Product Owner schedules Task 1, Task 2 and the investigation of Task 3 for Sprint 2. After estimating Task 3, Tasks 4 and 5 can be scheduled in Sprint 3 or later.
Estimating features is usually a complex task. After some time your estimates will get better. A good approach is to estimate features in story points. A story point is an abstract value (with a meaning agreed upon by the team) that expresses the complexity of the problem.
You should assign the same complexity (the same number of story points) to features of similar complexity. Then later on it is enough to estimate only a smaller set of features (or to look at the historical data) and you should be able to estimate how much time you need.
Features of similar complexity need a similar time effort to implement.
