I am making a BI system for a bank-like institution. The system should manage credit contracts, invoices, payments, penalties and interest.
Now I need to write a method that builds an invoice. I have to calculate how much the customer has to pay right now: he has a principal debt to repay, he also has to pay interest, and if he was ever late with a due payment, penalties are applied for each day he is late.
I thought of two ways of doing this:
1. Keep only one original state, the contract's original state, and each time the monthly payment has to be computed, take into account the actual payments that were made.
2. Continuously create intermediary states, starting from the last intermediary state and considering only the events that took place between these two states. This means having a job that runs periodically (daily, monthly), takes the last saved state, applies the changes (due payments, actual payments, changes in global constants such as the penalty rate controlled by the Central Bank), and saves the resulting state.
The benefits of the first variant:
Always up to date. If changes are made with a date in the past (a customer shows up with a paid invoice five days after he made the payment at the bank), they are correctly reflected in the results.
The flaws of the first variant:
Takes a long time to compute.
Documents printed from the current results may no longer match if the underlying data changes because of operations entered with a back date.
The benefits of the second variant:
Works fast, and aggregated data is always available for search and reports.
Simpler to compute
The flaws of the second variant:
Vulnerable to failed jobs.
Errors in the past propagate all the way through to the final results.
An intermediary result cannot be changed once new data about past transactions arrives (it can, but it's hard and has many implications, so I'd rather mark it as taboo).
Jobs cannot run cleanly if an unfinished transaction exists (an issued invoice that hasn't been paid yet).
Is there any other way? Can I combine the benefits of these two? Which approach is used in other similar systems you've encountered? Please share any experience.
Problems of this nature are always more complicated than they first appear. This is a consequence of what I like to call the Rumsfeldian problem of the unknown unknown.
Basically, whatever you do now, be prepared to make adjustments for arbitrary future rules. This is a tough proposition. Some future possibilities that may have a significant impact on your calculation model are back-dated payments, adjustments and charges.
Forgiven interest periods may also become an issue (particularly if back-dated), as may requirements to provide various point-in-time (PIT) calculations, based either on what was "known" at that PIT (a past view of the past) or on transactions that occurred after the reference PIT but were back-dated to a PIT before the reference (a current view of the past). Calculations of this nature can be a real headache.
My advice would be to calculate from scratch (i.e. the first variant). Implement optimizations (e.g. the second variant) only when necessary to meet performance constraints. Doing the calculations from the beginning is a compute-intensive model, but it is generally more flexible with respect to accommodating unexpected left turns.
If performance is a problem but the frequency of complicating factors (e.g. back-dated transactions) is relatively low, you could explore a hybrid model employing the best of both variants. Here you store the current state and calculate forward using only those transactions that posted since the last stored state to create a new current state. If you hit a "complication", redo the entire account from the beginning to re-establish the current state.
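A minimal sketch of that hybrid, assuming each transaction carries both a posted date (when it was recorded) and an effective date (the business date it applies to); the field names and snapshot layout are illustrative assumptions, not a real schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tx:
    effective: date   # business date the event applies to
    posted: date      # date we actually recorded it
    amount: float     # negative = charge/penalty, positive = payment

@dataclass
class Snapshot:
    taken_at: date
    balance: float

def rebuild_balance(opening_balance: float, txs: list[Tx], as_of: date) -> float:
    """Variant 1: recompute from the original state every time."""
    return opening_balance + sum(t.amount for t in txs if t.effective <= as_of)

def roll_forward(snap: Snapshot, opening_balance: float,
                 txs: list[Tx], as_of: date) -> float:
    """Hybrid: roll forward from the last snapshot, but fall back to a full
    rebuild if any newly posted transaction is back-dated before the snapshot."""
    new = [t for t in txs if t.posted > snap.taken_at]
    if any(t.effective <= snap.taken_at for t in new):
        # Complication: the snapshot is no longer trustworthy, redo from scratch.
        return rebuild_balance(opening_balance, txs, as_of)
    return snap.balance + sum(
        t.amount for t in new if snap.taken_at < t.effective <= as_of
    )
```

The point of the sketch is only that the fallback path keeps the full history authoritative; the snapshot is a cache, never the source of truth.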
Being able to accommodate the unexpected without triggering a rewrite is probably more important in the long run than shaving calculation time right now. Do not place restrictions on your computation model until you have to. Saving the current state often brings with it a number of built-in assumptions and restrictions that reduce the wiggle room for accommodating future requirements.
I have been using agile approaches (XP and Scrum) for my projects for several years with great results. But in all cases, all members of the dev team were committed 100% to the project.
Now I am faced with doing this when the team is not stable. For instance, one iteration there may be four people working, the next maybe only two or three.
I realize this makes it hard (or impossible) to estimate using the normal velocity approach, since velocity will fluctuate too much to be stable. It follows that one cannot really expect to be able to release at the end of each iteration.
Maybe another approach is needed here: just grab stuff from the backlog, muddle through, and release whenever it is possible. I really don't like that, though...
Any thoughts?
From the question I assume you have some developers (probably 2) 100% committed to the project and some (another 2-3) who only participate at times.
One thing you can do is set up a different process for the core developers who are 100% committed and for everyone else. Use your normal agile process for the core people and release their work on the normal iteration cycle. For the non-core people, do little planning and assume their (and your) estimates will be way off at times. Ideally their changes should be isolated and merged into the stable branch of code by core members, but not every project's architecture and team roles allow this.
The point is to separate and isolate the source of chaos and leave the heart of the project and team unaffected.
Maybe instead of agile approaches, you can slow things down with other iterative and incremental approaches. Instead of having iterations measured in weeks, having longer iterations (perhaps measured in months) would be better if you keep adding and dropping people from the team.
This doesn't mean that you still can't use some Agile techniques. I would still maintain your backlogs and burndown charts, with the realization that instead of having a release every 2 weeks, you'll release every 6 weeks or so. If you have new developers joining more experienced developers, use pair programming, assign the new developers to bug fixes, or assign the new developers to maintaining unit tests to help them learn the code base.
Velocity is only an estimation.
Naively, if you have a given velocity v with a team of 4 developers, then schedule your iteration with a velocity of (v/4)*number_of_developers
You can fudge this value if the members you are losing are particularly stronger or weaker than the average.
This is basically what PivotalTracker does with its team strength metric.
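As an illustration of that fudge, here is a tiny sketch; the "strength" weights are made-up numbers in the spirit of a team-strength metric:

```python
# One weight per developer in the upcoming iteration: 1.0 = average,
# 1.2 = stronger than average, 0.8 = weaker, etc. (hypothetical values).
def adjusted_velocity(historical_velocity: float,
                      historical_team_size: int,
                      next_iteration_devs: list[float]) -> float:
    per_dev = historical_velocity / historical_team_size
    return per_dev * sum(next_iteration_devs)

# Example: velocity 40 with 4 devs; next sprint has two average devs and one
# strong one -> plan for roughly 32 points.
print(adjusted_velocity(40, 4, [1.0, 1.0, 1.2]))  # 32.0
```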
So, you have a project with a continually changing team size and your boss wants you to give him an accurate estimate of how long it will take? You can do this, as long as you keep in mind the difference between accurate and precise. Your precision will depend largely on the number of items and how granular (decomposed) each item is; the more items you have the more the Law of Large Numbers works for you, averaging out over- and underestimates.
Your accuracy is a function of confidence. Note that estimates aren't single-point values; they're a range of numbers with a percentage of confidence. For instance, a proper estimate wouldn't be "2 weeks", it would be "50% confidence of 2 weeks, 80% confidence of 4 weeks."
If I were the person assigned the unenviable task of providing an estimate to completion for a project being managed as arbitrarily as in the original post, I'd do the following. First, figure out a range based on the minimum number of folks assigned (e.g., "48 to 66 weeks given 2 developers [50% to 80% confident]") and a range associated with the average number of folks assigned (e.g., "25 to 45 weeks with 5 developers [50% to 80% confident]"). Then use the low figure from the average number along with the high figure from the minimum number (e.g., "25 to 66 weeks given anywhere from 2 to 5 developers [50% to 80% confident]"). And even then I'd put a disclaimer on it ("plus 10% for the time lost due to context switching").
Better yet, I'd explain exactly why this arrangement was, to be polite, sub-optimal, and why multi-tasking is a primary signpost on the road to project Hell.
As someone else suggested, changing the workflow from iteration-based to flow-based (Kanban) might well be a good strategy. With Kanban you handle changing project priorities by changing the priority of items in the backlog; once an item has been grabbed by the team it is generally finished (flows all of the way through the workflow, stakeholders aren't allowed to disrupt the team by screwing around with work-in-progress). I've used Kanban for sustained engineering projects and it worked very well. Re how it would help with estimates, the key to continuous flow is to try to have each work item be roughly the same size (1x, 2x, 3x, not 10x, 20x, 100x). You should track movement of items through the workflow by tracking dates of process state changes, e.g., Queue 1/15, Design 1/22, Dev 1/24, Test 2/4, Integrate 2/7, etc., and then generating a cumulative flow diagram regularly to evaluate the time-in-state durations over time. Working out how long the project should take given that you know the size of each item and the time through the workflow for items is a trivial computational exercise left to the reader. (The more interesting question is how to spot constraints... and then how to remove them. Hint: look for long times in states, because work piles up in front of constraints.)
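To make that bookkeeping concrete, here is a minimal sketch of computing time-in-state from those state-change dates; the states, dates and year below are invented for illustration:

```python
from datetime import date

# Hypothetical state-change log for one work item.
item = {
    "Queue": date(2024, 1, 15),
    "Design": date(2024, 1, 22),
    "Dev": date(2024, 1, 24),
    "Test": date(2024, 2, 4),
    "Integrate": date(2024, 2, 7),
    "Done": date(2024, 2, 9),
}

states = list(item)
for current, nxt in zip(states, states[1:]):
    days = (item[nxt] - item[current]).days
    print(f"{current:>10}: {days} days")  # long durations hint at constraints

cycle_time = (item["Done"] - item["Queue"]).days
print(f"Total cycle time: {cycle_time} days")
```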
Let the individual developer that will be working on the story estimate the effort required to complete the story. You can take into account historical variances in that developer's estimations, but the idea is that you can take their estimates and then figure out how many stories you'll be able to finish in that sprint.
Don't forget that average velocity is largely used for lookahead release planning; the team is responsible for selecting in each iteration how many backlog items to take on (although knowing historical velocity can assist them).
If your team size (and hence velocity) is fluctuating from iteration to iteration, you can still do useful release planning by using average velocities over the past N sprints, assuming the team fluctuations will continue and hence their long-term average velocity will actually be stable.
Your main problem here is that the team will find it hard to give predictable estimates and deliveries since the team is changing from sprint to sprint. This can also hurt the team commitment and continuous improvement.
This case might actually be well suited to a Kanban approach. Check out Henrik Kniberg's introduction to Kanban for a quick overview.
Good luck!
Note: Before asking this question I did an exhaustive search, and found little bits of the answer in various other questions, for example:
- What is the best resource for learning Scrum?
- Scrum Process Management - tips, pitfalls, ideas
- Two questions regarding Scrum
However, I feel like this question hasn't been directly addressed (if it has, please let me know).
Do you track time in Scrum as a function of hours/days spent on a task, or simply whether that task is complete or not? Can you adjust those tasks and estimates?
Background: Our new VP of development came from a Scrum environment, and so we're all learning about the process, but one of the things he has brought with him is the concept of very carefully quoting estimates of actual hours each task should require to complete, with the intention of getting more accurate with our estimates over time: thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks.
But it was my understanding that agile practices, specifically Scrum, were based upon the concept of tasks being buckets that store individual deliverable goals, and you add/remove/adjust them as the clients' needs evolve after each sprint.
I realize this could potentially be argumentative, but I assume that if you view Scrum as a defined process, only one of these concepts is the "correct" philosophy for that system.
Do you track time in Scrum as a function of hours/days spent on a task, or simply whether that task is complete or not?
I track the estimated remaining work. This is must-have information. Without it, you can't draw the Burndown Chart. Without the Burndown Chart, you don't know "where" you are, you don't know whether your Sprint is still on track. That would make this decision tool pretty useless. Yes, the Burndown Chart is not a tracking tool, it's a decision tool.
Can you adjust those tasks and estimates?
Sure!
Actually, the team owns the estimates, nobody else, and it is the job of the ScrumMaster to guarantee that this principle is applied. This should already answer the question, but there are other reasons.
As I said, the Sprint Backlog and the Burndown Chart are decision tools and should thus be representative of where you really are. If you hide the reality, if you are not transparent, these tools won't help you make any valuable decision; they will be useless. Think about it: what's the point of having good-looking numbers if they are useless? What's the point of having a "nice looking burndown" if it doesn't reflect reality?
So, during a Sprint, team members should obviously update the estimates of the remaining work as soon as they can (upward or downward). If a task was initially estimated at 6h but the team discovers that more work has to be done and that the task will actually take 8h, the team should update the Sprint Backlog accordingly. If someone spent 4 hours on a task that was initially estimated at 4h but it still needs 2h of work, those 2h should be reported on the Sprint Backlog. If the team discovers a task that has to be done but wasn't identified, the team must add this task and its estimate to the Sprint Backlog. Not being accurate at the start is not a problem, as long as you update the backlog with the knowledge gathered over time. The sooner you make these updates, the sooner you'll be able to adapt and take decisions.
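For illustration, a minimal sketch of that bookkeeping with invented task names and numbers; the only rule is that the Sprint Backlog always holds the current estimate of work remaining:

```python
# Hypothetical Sprint Backlog: task -> remaining hours.
sprint_backlog = {
    "build login form": 6,     # originally estimated at 6h
    "wire up payment API": 4,  # originally estimated at 4h
}

# Day 2: the login form turns out to be bigger than thought -> update to 8h.
sprint_backlog["build login form"] = 8

# Day 3: 4h were spent on the payment task but 2h of work remain.
sprint_backlog["wire up payment API"] = 2

# Day 3: a forgotten task is discovered -> add it, with its estimate.
sprint_backlog["migrate user table"] = 3

# Today's burndown point is simply the sum of the remaining estimates.
print(sum(sprint_backlog.values()))  # 13 hours remaining
```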
That said, it can be useful to keep the "initial estimate" and compare it to the "actual time spent to complete", but not for tracking purposes, only to help the team make better estimates. Actually, I would advise against doing this if you are transitioning to Scrum. There are often many other impediments to solve, many other things to improve first, while you are learning Scrum values and principles. And if you do it, beware of the Waterfall daemons. Be ready to fight them; they may come back very fast.
The answers I see here aren't wrong, but I don't think they've really addressed your question.
I think you're asking, "Should I track the total hours actually spent on a certain task?" The answer is, "You can if you need to, but it isn't part of Scrum."
Scrum is a very lightweight process. It defines/requires only what is needed to make Scrum work. You can (and, in many cases, probably should) overlay other processes on top of Scrum in order to suit your organizational needs. For example, if tracking the total hours actually spent on a task enables you to better estimate similar tasks in the future (as it seems your VP wants), then that might be a good reason to track total hours, provided that it doesn't interfere with productive work too much. Or, perhaps you need to know the total hours for billing purposes. So just because Scrum doesn't require something doesn't mean you shouldn't do it.
However, for the purposes of Scrum itself, there is no need to track the total hours actually spent on a task. It is not needed for any of the Scrum artifacts, which only track the estimated amount of time remaining.
I don't know if our implementation is "correct", but what we do is:
Have Backlog Items added, which we put an estimated complexity number on (in relation to other backlog items).
Before each sprint, we go through the backlog items in priority order (prioritized by the product owner), break them down into tasks for which we make a time estimate (in hours).
When the number of available hours in the sprint is used up, the sprint is full.
Then, during the sprint after each day of work we adjust the times on the tasks that we have been working on, so that they show the number of hours that we think is left before the task is done. This means that if I have a 6 hour task, work on it for a full day (we consider 6 hours a full day) and then feel that I still have 2 hours left before it's done, then I take down the "hours left" from 6 to 2. In case the task is time-boxed we need to check actual hours used instead, of course.
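As a rough sketch of the "sprint is full" rule above (capacity figures and tasks are invented):

```python
team_members = 3
days_in_sprint = 10
focus_hours_per_day = 6          # the answer above counts 6h as a full day

capacity = team_members * days_in_sprint * focus_hours_per_day   # 180h

# Tasks broken down from the prioritized backlog items, in priority order.
tasks = [("checkout page", 24), ("order emails", 16), ("admin report", 40),
         ("search filters", 60), ("CSV export", 30), ("audit log", 50)]

committed, used = [], 0
for name, hours in tasks:
    if used + hours > capacity:
        break                    # sprint is full; the rest stays in the backlog
    committed.append(name)
    used += hours

print(committed, f"{used}/{capacity} hours")
```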
I have to add something here, because this:
"but one of the things he has brought with him is the concept of very carefully quoting estimates of actual hours each task should require to complete, with the intention of getting more accurate with our estimates over time: thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks."
is just plain not Scrum, so I don't know where your VP got his info. Tasks (known as Sprint Backlog Items) are not created until planning the next Sprint. They are created just in time, and certainly not before the project starts. Before the project starts (Sprint 0), the Product Owner creates the Product Backlog and fills it with stories; he can add to it at ANY time during the project. It is his to manage. The team estimates these stories roughly against one another in story points or some other relative measure (ideal days?).
Estimating tasks in hours is only a tool the team uses to figure out how many stories to commit to in the sprint, and then to plot progress and predict success (the burndown). Once a team has gelled and has a historical velocity, it may decide not to do any tracking in hours at all and just track its burndown in story points or number of stories. Estimating in hours is a form of waste in itself if the team does not need it to achieve commitment to the sprint goals.
I would ask the VP what these "very careful" estimates are going to accomplish.
Estimate time, but don't really care if it's spot on
Just make sure you are careful and estimate tasks thoroughly. Basically you don't really measure time, because that is more error prone. The best way is to use the tasks' time estimates as story points. This way you gain the following (a small sketch follows the list):
1. If your time estimates are off, research shows that they tend to be consistently off (the accuracy factor doesn't change too much), so time estimates can easily be used for story point calculation.
2. If you empirically managed to do x story points in the previous sprint, you'll probably achieve similar results this time round even though your time estimates are incorrect.
3. You will have to be rather good at estimating all story tasks. Otherwise your sprint story points tend to grow during execution and you won't meet your deadline, even though your velocity remains practically the same.
4. Estimates can change, but similar to #3, keep some sprint slack time for these changes to meet sprint deadlines (demo day).
5. But keep time estimates so you can actually see which tasks must be split or joined.
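A tiny sketch, with invented numbers, of why a consistent estimation bias cancels out when estimated hours are used as story points:

```python
bias = 1.5   # the team consistently takes ~1.5x its estimates (unknown to it)

last_sprint_estimates = [8, 6, 12, 4, 10]   # finished work, in *estimated* hours
velocity = sum(last_sprint_estimates)       # 40 "points", regardless of the bias
# (In real hours the team spent velocity * bias = 60h, but that never shows up.)

# Next sprint: commit to ~40 estimated hours again. The same bias applies to
# the new estimates, so the commitment is still realistic.
next_candidates = [10, 8, 6, 12, 9, 7]
committed, total = [], 0
for est in next_candidates:
    if total + est > velocity:
        break
    committed.append(est)
    total += est

print(committed, total)   # [10, 8, 6, 12] 36
```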
We track both the time spent working on the tasks and the time remaining to complete them. The remaining time allows us to determine the progress made during the Sprint and to anticipate whether we will be able to achieve the Sprint goal. We update the remaining time for the tasks, adjusting it (sometimes increasing it) on a daily basis.
The time spent is, supposedly, for micromanagement. It also gives the team a chance to get some feedback on the accuracy of the estimates, to get better at estimating, and to show how interruptions prevent the team from working on the Sprint Backlog and therefore slow it down.
In the Scrum process, individual deliverable goals are called Backlog Items and can be seen as buckets of tasks. The Backlog Items are prioritized by the Product Owner and estimated by the Team, first as a whole and then task by task. The content, scope, priority and estimation of the Backlog Items can be revised.
We estimate both the Backlog Items and the tasks in time units (days or weeks for the Backlog Items, hours for the tasks) and we apply a focus factor (the ratio of time dedicated solely to work on the Sprint tasks) to account for time not spent working on tasks toward the Sprint goal.
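For example (all figures are invented), the focus factor simply scales the raw capacity:

```python
team_size = 4
sprint_days = 10
hours_per_day = 8
focus_factor = 0.6        # share of time actually spent on Sprint tasks

sprint_capacity_hours = team_size * sprint_days * hours_per_day * focus_factor
print(sprint_capacity_hours)   # 192.0 hours available for Sprint task estimates
```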
With respect to time tracking, what you're looking for is a burndown chart.
Fredrik explained what a burn down is, without using the term. Essentially, you regularly reestimate the time remaining for a particular activity.
So to your question of whether or not we track time spent, not necessarily. Scrum likes to work with time remaining instead. (You could substitute hours with story points, the principle is the same, as Robert explained.)
To your second question of whether you can adjust your tasks and estimates, most definitely yes. Agile follows the 'reactive to change' philosophy; you prioritize what's most important to the customer.
However, some teams prefer not to add/delete/re-prioritize tasks in a particular sprint once it has begun, since that is almost an ad-hoc way of working, and even Scrum requires some structure and discipline.
The statement "thus once a project has started we cannot add new tasks or adjust the hourly estimates on those tasks." is almost certainly not in the spirit of agile.
We use the Pomodoro Technique to track the time remaining. One of its advantages is that the amount of time spent is recorded in a disciplined way.
After estimating stories in story points, we estimate tasks in terms of pomodori, and use this estimate (which may be reestimated ad hoc) to judge the amount of time remaining. At the end of the sprint it's easy to see which tasks we originally estimated the least accurately and improve how we estimate in the future, due to the way we mark the number of pomodori estimated and completed on each post-it.
In terms of the sprint, the estimated hours remaining are just a measure of progress so we can see where we are burndown-wise. They're a clue to whether we're on track or not. The score that matters is story points completed.
By definition, an item is done when all of the tasks that need to be completed in order to fully implement that item have 0 hours left. What you need to track inside the sprint is remaining hours on remaining tasks. Not hours spent on a task. Why? Because our knowledge of how long something will take is imperfect and we gain little by trying to come up with a super-accurate estimate when we should be working on the product.
You are always allowed to add tasks under a sprint backlog item as you identify more work that must be done to fully implement the item, and you should update the remaining hours to completion daily (or set them to 0 once you've completed the task).
You should tell your VP that knowing when you're going to ship the product based upon your most accurate information (today) is far better than setting a number/making an estimate in the past and never updating it. This doesn't mean re-estimating user stories (don't do that until the end of the release), it means updating the sprint backlog with new tasks, and the best estimate as to when active tasks will be complete in remaining hours.
BTW, the way to work on accurate estimates is to plan your release using story points, create an iteration plan based upon your estimated team velocity, and then to continually update the iteration plan based upon the output at the end of each sprint. After a very few sprints you will get a very accurate idea of the actual team velocity, making it easy to forecast when you will ship your release with the desired scope... or what scope should be completed by the original ship date. Using actual project data from your current project to predict project completion is a software engineering best practice, because it is the most accurate way to make a prediction.
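A sketch of that forecasting arithmetic, with invented numbers and an assumed 2-week sprint length:

```python
import math

remaining_story_points = 120
recent_sprint_velocities = [18, 22, 20]            # actual output of past sprints
avg_velocity = sum(recent_sprint_velocities) / len(recent_sprint_velocities)

sprints_left = math.ceil(remaining_story_points / avg_velocity)
weeks_left = sprints_left * 2                      # assuming 2-week sprints

print(f"~{sprints_left} sprints (~{weeks_left} weeks) to ship the current scope")
```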
I have read the Agile Manifesto and spent a nice day surfing the web in search of this elusive answer, but sadly I did not get an answer that covers all the bases.
When watching all the blog posts and newscasts of Agile preachers, you only hear about open-scope or open-"time" projects. How do you apply this to a fixed-cost project?
From what I found out, the biggest problem is scope management: how do you determine whether something is outside the projected scope, and how do you formulate arguments for your decision? Because of the agile way you are implementing your software, there is no detailed design to argue upon. In most cases you only have a vague wish-list that the customer hands to you, and it is so general that you can interpret almost any feature into it.
And with the rising percentage of fixed-cost projects, this seems to me to be a real issue.
So the questions would be:
How do you manage scope in a fixed-cost project?
How do you determine whether the features wished for are outside the original scope?
To me, the short answer about Agile and fixed price is that you can't do it, at least not with a fixed scope.
I know some people will say "that's not true, we are doing it" but, with all due respect, I don't think they are really doing Agile and I'll explain why. Actually the explanation is quite simple: fixed price implies fixed scope and is based on predictability where Agile is all about variable scope, scope management and adaptivity. So fixed price with fixed scope is basically the opposite of Agile.
With an Agile approach, fixed price gives you a number of iterations for a given team size. During these iterations, the customer will be able to have the team build the most valuable features first and thus to maximize the generated business value. The whole idea is then to stop iterating when the cost of an iteration is greater than the generated value. This is how Agile works.
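As a back-of-the-envelope illustration (rates and team size invented), a fixed price simply buys a number of iterations:

```python
fixed_price = 200_000            # total budget, in whatever currency
team_size = 4
day_rate_per_person = 500
iteration_length_days = 10       # a 2-week iteration

cost_per_iteration = team_size * day_rate_per_person * iteration_length_days
iterations_bought = fixed_price // cost_per_iteration

print(cost_per_iteration, iterations_bought)   # 20000 per iteration -> 10 iterations
```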
So when people say they do fixed price with fixed scope in an agile way, they actually introduce some constraints that are not really compatible with Agile theory (like doing an up-front estimation of a given set of features and freezing those features and estimations) and they lose important advantages of Agile (unless they have a perfect knowledge of the technologies and of the business domain and master them well enough to predict everything, but I know few projects like that).
Here is anyway a good compilation of various Agile contracts that might be helpful: 10 Contracts for your next Agile Software Project. But I think they all require some education of customers, especially the ones that are used to fixed price with fixed scope (and late deliveries).
Scrum does not replace having proper requirements, or even having occasional major releases or milestones. Rather, it gives you a means to keep your team productive and focused, and avoids the time-wasting side-effects of a waterfall process.
In fact, one of the biggest advantages of an agile process like Scrum is that it causes you to "fail quickly and loudly" on problematic areas of your project. If, after a couple of sprints, your team still can't effectively estimate the time and resources needed to implement a particular feature, it may be worth pushing back on the requirements in that area -- they may need to be clarified, simplified, or scrapped altogether. In a traditional waterfall process, however, those "problem features" can often be pushed back to the last possible minute, resulting in the usual deathmarch and under-delivery into which most projects devolve.
However, the role of the Product Owner is even more critical in teams using Scrum who have a large set of requirements. Left to their own devices, most development teams will focus on the most interesting/fun/geeky features (service APIs, caching, search) first, and leave the "messy" stuff like payment process, UX design, and i18n until the last minute. A strong user voice is essential to making sure those features critical to the end user receive their fair share of attention.
Okay, this will not be the ideal answer you are looking for, but may help non-the-less.
For your first point:
With agile, and Scrum in particular, the style is suited to changing specifications and unfixed deadlines using iteration patterns. Managing this in a fixed-scope project will be a nightmare. What one would normally do is set a budget for the specified scope, and any addendum to it would produce billable hours above and beyond the scoped budget. To do this in Scrum would be pointless, as the product backlog will continually be filled by the stakeholders. If there is no "punishment" for scope changes under a fixed budget, there will be nothing holding people back from just piling work onto you.
The alternative here is to have fixed scope sprint successions, so for instance:
5x Sprints = x Cost with minimal scope change.
For your second point:
The use of analysis and design is an invaluable tool. By using use cases, event tables, sequence diagrams, state machines and the like, you will save yourselves oceans of tears in the long run. Basically, once the planning has been done, any addendum that requires additional use cases (please note: additional, not things that were overlooked) and large code changes will be out of scope. In fact, anything that was not overlooked in the planning and is not in your specification is out of scope.
In closing, you will need to have very well planned documentation as well as very solid agreements with your clients to be able to pull this off 100%.
I hope this helps.
I worked in an environment where we had fixed-cost and fixed-time projects. We had switched to a Scrum-esque methodology from a Waterfall/V-Model methodology. Scrum can work very well in fixed cost/time projects, as the concept is that the customer is put in control; however, for this to work you have to be able to somewhat accurately determine what work is required and what it will cost (time, money, resources). And this is a situation where Scrum is an ideal candidate.
You break down the wishy-washy wish list/requirements/screenshots into tangible deliverables. E.g. a customer may say "I want ecommerce, with PayPal"; you need to break this down into actual deliverables, e.g. "1. Customer Registration and Login, 2. Product Catalogue, 3. Shopping Bag, 4. Payment, 5. Order Acknowledgement". At this stage it's still impossible to determine how long it will take, and of course we need to deliver all of the above in order to complete the project (i.e. you can't have ecommerce without Payment). So break them down again, and again, until you have granular deliverables, generally deliverable within hours, maybe days, but certainly not weeks, e.g.
1 Catalogue
1a View all Items
1ai View all items on 1 page with an image and item name underneath in a grid, 4 items per row
1aii View 10 items per page with paging
1aiii View a user-selected number of items per page, with paging
1aiv View all items on 1 page with an image and item name, description and price on the same line, 1 item per row
1b View by Category
...
1c Search
...
1d Attribute Filter
...
And so on. It can be done very quickly, and you can now probably guesstimate how long it would take to do x (of course, I might break the above down even further, add more descriptive text to describe the work required, such as what persistent data structures I might need, the data in those structures, and how data will be added; going further you might even describe the required begin and exit states).
Once you've got this, you'll notice that some features are dependent on others, e.g. you can't have a paging feature on a catalogue unless you have a catalogue to start with, and the catalogue will require the CMS screens to add and edit items, etc. Highlight these "can't live without" features in whatever tool you are using; this forms the core project, and within a day or two you have a bunch of features that can be developed somewhat standalone, with costs, which when added up make up the cost of the project. And now the customer is in charge: they decide they want to add a feature and increase the cost; cool, it's up to them after all.
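To illustrate rolling the granular estimates back up into a project cost, here is a sketch in the spirit of the breakdown above; the hours, the hourly rate, and the "core" flags are invented:

```python
features = [
    # (id,    description,                       hours, core?)
    ("1ai",   "all items, grid, 4 per row",          6, True),
    ("1aii",  "10 items per page with paging",       8, True),
    ("1aiii", "user-selected page size",             4, False),
    ("1b",    "view by category",                   10, True),
    ("1c",    "search",                             16, False),
    ("1d",    "attribute filter",                   20, False),
]

hourly_rate = 80
core_hours = sum(h for _, _, h, core in features if core)
optional_hours = sum(h for _, _, h, core in features if not core)

print("core project:     ", core_hours, "h =", core_hours * hourly_rate)
print("optional features:", optional_hours, "h =", optional_hours * hourly_rate)
```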
All the above is obviously only a small portion of what scrum or any agile process is.
I don't think a fixed price contract with scope creep and a Scrum process are incompatible. You just need to agree up front with your customer how it will work. If you create your initial backlog with your customer, estimating as you go, you can use that as your basis for the fixed price cost and schedule. You can even agree to a rate of "X" story points equals "Y" cost and "Z" schedule at the beginning.
You then do the normal scrum thing, having the customer allocate stories to the current iteration, etc.
As the customer engages in scope creep, you work with them to add the "creep" as user stories to the backlog. Each time you add a new story, point out that for each X points added to the backlog, they will have to increase cost by Y and schedule by Z, or, they will have to give up story points of equal value. Since they are picking what you work each iteration, the points they give up (if that's the choice) will be the least valuable features. When your schedule runs out, you will be left with a backlog of the least important features that they can choose to drop or give you a new contract to finish.
The trick, of course, is to be good at estimating cost and schedule for each story/task ;-)
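For illustration, a tiny sketch of the "X points = Y cost and Z schedule" rate agreed up front (the specific rate is invented):

```python
COST_PER_POINT = 1_000       # agreed at contract time (hypothetical)
DAYS_PER_POINT = 1.5

def scope_change(points_added: int, points_dropped: int = 0):
    """Return the (cost, schedule) delta of a scope change, or zero if the
    customer trades out an equal number of low-value points."""
    net = points_added - points_dropped
    return net * COST_PER_POINT, net * DAYS_PER_POINT

print(scope_change(8))        # add 8 points: (8000, 12.0 days)
print(scope_change(8, 8))     # swap 8 in, 8 out: (0, 0.0)
```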
The project could be broken down into smaller parts and fixed rates could be attached to those. The other phases of the project could then be adjusted.
You have to be able to sell the agile process against your competitors. If a client has a history of fixed bid projects that were delivered on time, spec and cost, why would they waste their time taking bids from other developers?
Fixed cost does not mean a single sprint. Scope gets transferred to the Product Backlog, and as Sprints progress, scope is adjusted, negotiated and delivered. Scrum allows for rapid value delivery, and provides quick validation and the opportunity to identify potential gold plating.
Scope change may result in the addition of some backlog items and the deletion of others. It's a balance of ROI versus the fixed budget provided.
If the scope does increase (and add value), and the cost is fixed, then the triple constraint (cost, time and scope) must be managed accordingly.
Remember that fixed cost does not mean fixed length.
It took me some time, but I've finally managed to write down all the tasks that need to go into Version 1.0 of the software product I'm working on.
The list is almost 1000 items long.
We are a 3-person team, and we've somehow managed to get this far using MindMeister, Google Docs, #todos in the code etc. Now, I have everything neatly grouped by feature, but how do I prioritize all this and turn it into a schedule?
Any advice would be greatly appreciated - I'm not looking for software recommendations, however - I'm seeking advice on how to take this enormous bag of tasks - ranging from bug-fixes to application modules - and find out in what order I should do them.
Prioritize ruthlessly. 1000 action items is a lot, and the odds are that as you go you'll modify some, toss others, and add new ones. Your list will not survive the things you learn by actually building the software, and if you don't do the most important stuff first, you'll end up with a mess.
For every item or feature, you have to answer the question: Can the product be at all usable or useful without this? If yes, it can wait; everything else goes to the head of the queue.
After that, I like to group milestones by focus: I'll do a features milestone (or multiple ones if there are natural small clusters of features), a UI milestone where I'll focus on AJAX/rich client interactivity, a performance milestone where I profile and do database & server tuning, etc. Or break them up some other way - but definitely break them up. Work in smaller bites with specific focus for each iteration, and make sure each iteration is solid before moving on.
My recommended approach will be based on Agile methodology best practices...
So, you have what in Agile terms is called a "backlog" defined. That's great, and an important first step.
A good Agile pace that is commonly used is a 2-3 week iteration length, and at the end you have a set of releasable features. This will establish the "heartbeat" of your development process. Next, you'll decide how to organize and group the features into Stories and Tasks.
You'll want to grow the underlying architecture and let it naturally emerge based on the ordering of the Stories and Tasks that you select from your backlog.
It's important to mitigate risks early, so you'll want to select early those items that are either performance or implementation unknowns and might pose the largest risk, since they could result in the largest rework impact. For example, establishing the messaging infrastructure might be an early architectural feature to include if you select a Story that requires a persistent message to be delivered to complete a unit of work.
Can you group the set of features into functional categories that might naturally evolve to describe the 1.0 release as a System of Systems? For example, the Administrative functions, the User Profile Management, Reporting, external integration layers, Database Access Objects, etc.
What are the simplest Stories / Use Cases that you can write that will map to some of the ~1,000 features / requirements you've defined? Select a set of Stories (or individual Tasks from a Story, if the Story itself is too large to implement in a single iteration). It will take some additional effort, but recomposing your requirements into a set of Stories/Tasks is important.
You'll find that you will refactor during subsequent iterations, but your steady 2-week heartbeat iteration schedule will keep delivering real functionality.
At various points you may want to schedule an architecture iteration just to focus on some cleaning-up / refactoring - and that's ok too.
Since you are indicating that all these items are required, I will assume that there is not much chance of dropping items off the list (at least for now). Given that, you have 2 large tasks at hand - deciding when to do items, and determining how long it will take to do them.
Since you have already conveniently grouped the items by feature, I would start by prioritizing the features. Hopefully this will significantly reduce your working set, and allow you to actually get through it in a reasonable amount of time.
I would prioritize each feature based on its risk. Some things are easy to implement and others are difficult. Since they are all required, do the riskiest features first, when your schedule is more flexible to meet any unanticipated problems. Wait until the end of your cycle, and Murphy's law will strike you down.
Given your small team, I would just send the list of features around and ask everyone to mark it if they consider it a risky or difficult feature to implement. Add up all the marks and you have your "risk assessment", with the highest scoring items getting assigned first.
Alternatively, if you have easy access to your customer, ask them to rate the "risk" associated with each feature (in this case risk refers to the worst-case scenario of not having the feature - if not having something would be annoying, it is not risky. If not having the feature would result in them not using your product, it is high-risk).
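As a sketch of the "add up the marks" tally (feature names and votes are invented):

```python
from collections import Counter

# Each developer's picks of features they consider risky or difficult.
votes = [
    ["payments", "reporting"],              # dev A
    ["payments", "external integration"],   # dev B
    ["payments", "reporting", "search"],    # dev C
]

risk_score = Counter(f for ballot in votes for f in ballot)

# Highest-scoring (riskiest) features get scheduled first.
for feature, score in risk_score.most_common():
    print(score, feature)
```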
Now that you have a priority queue, it is time to estimate. For the initial estimates, I would simply do an order of magnitude estimate for each of the features. Since it sounds as if you have already broken the features up, you should be able to get a decent feel for whether something is going to take hours, days or weeks. From the sounds of it, you are still early in development, so I don't believe there is much point in trying to get an accurate estimate on something that won't be implemented for another month or so.
As you pull items off your queue, have your team provide more accurate estimates by identifying granular tasks that shouldn't take more than a few hours. If you want to refine your order of magnitude estimates, you can progressively provide quick estimates for the remaining tasks based on your up-to-date knowledge of the system.
This should provide you with a fairly accurate short term schedule, and a fuzzier long term schedule that will progressively get more accurate.
Finally, if you are facing a long development cycle, I would recommend you identify certain target goals or dates, and when you meet those goals, sit down and repeat this whole process. I would never go longer than 2 weeks without revisiting these things. New items will get added, others will get overtaken and become obsolete, and others will become higher risk as you better understand the problem. All of this must be taken into account.