So far I have done effort estimation based on experience, and more recently using function points.
I am now exploring Use Case Points (UCP). I read this article, http://www.codeproject.com/KB/architecture/usecasep.aspx, and then checked various other articles on UCP, but I am unable to work out exactly how it works and whether it is correct.
For example, I have a login functionality where the user provides a user id and password and I check against a table in the database to allow or deny the login. I define a User actor and Login as a Use Case.
As per UCP, I categorize the Login use case as Simple and the GUI interface as Complex. From the UCP factor table I get 5 and 3, so the total is 15. After applying the technical factor and environmental factor adjustments it becomes 7. If I take the productivity factor as 20, I get 140 hours. But I know it will take at most 30 hours, including documentation and testing effort.
Am I doing something wrong in defining the use case here? UCP says that if the interface is a GUI then it's complex, but here the GUI is easy enough, so should I downgrade that factor? Also, the factor for Simple is 5; should I define another level, Very Simple? But am I not overcomplicating matters then?
Ironically, the prototypical two-box logon form is much more complicated than a two-box CRUD form, because the logon form needs to be secure while the CRUD form only needs to save to a database table (and read, update and delete).
A logon form needs to decide where to redirect to, how to cryptographically secure an authentication token, whether and how to cache roles, and whether and how to deal with dictionary attacks.
I don't know what this converts to in UCP points; I just know that the logon screen in my app has consumed much more time than a form with a similar number of buttons and boxes.
The last time I was encouraged to count function points, it was a farce, because no one had the time to set up a "function points court" to get rulings on hard-to-measure things, especially ones that didn't fall neatly into the model that function point counting assumes.
Here's an article talking about Use Case Points, via Normalized Use Case. I think the one factor overlooked in your approach is productivity, which is supposed to be based on past projects. 20 seems to be the average; however, if you are very productive (there is a known 10-to-1 ratio between moderate and good programmers), the productivity factor could be 5, bringing the UCP estimate close to what you think it should be. I would suggest looking at past projects, calculating the UCP, getting the total hours, and determining what your productivity really is. Productivity, being a key factor, needs to be calculated for individuals and teams in order to be used effectively in estimation.
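As a rough sketch of that calibration step, you would divide actual hours by delivered UCP for each past project and average the results; the project names, UCP totals, and hours below are all invented:

```python
# Calibrate the productivity factor (hours per UCP) from past projects.
# All project data below is hypothetical - substitute your own history.
past_projects = [
    {"name": "project-a", "ucp": 40, "actual_hours": 520},
    {"name": "project-b", "ucp": 25, "actual_hours": 410},
    {"name": "project-c", "ucp": 60, "actual_hours": 930},
]

rates = [p["actual_hours"] / p["ucp"] for p in past_projects]
productivity_factor = sum(rates) / len(rates)  # average hours per UCP

print(f"Calibrated productivity factor: {productivity_factor:.1f} hours/UCP")
```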
Part of the issue may be how you're counting transactions. According to the author of UCP, a transaction is a "round trip" from the user to the system and back to the user; a transaction is finished when the system awaits a new input stimulus. By that definition, a logon is probably just one transaction, unless there are several round trips to and from the system.
Check out this link for more info: http://www.ibm.com/developerworks/rational/library/edge/09/mar09/collaris_dekker/index.html
First, note that in a previous work Ribu stated that the effort for 1 UCP ranges from 15 to 30 hours (see http://thoughtoogle-en.blogspot.com/2011/08/software-quotation.html for some details).
Second, it is clear that this kind of estimation, like function points, is more accurate when there are many use cases, not just one. You are not considering, for example, project startup, project management, creation of environments, etc., which are all packed into those 20 hours.
I think there is something wrong in your computation: "I get 5 and 3 so the total is 15". UAW and UUCW must be added, not multiplied.
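To make the computation concrete, here is a minimal sketch of the standard UCP formula using the question's weights, with UAW and UUCW added as described; the TCF and ECF values are placeholders, since the question doesn't state them:

```python
# Unadjusted Use Case Points: actor weight and use case weight are ADDED.
uaw = 3   # complex actor (GUI interface)
uucw = 5  # simple use case (Login)
uucp = uaw + uucw  # 8, not 15

# Adjustment factors - placeholder values invented for illustration.
tcf = 0.85  # technical complexity factor
ecf = 0.90  # environmental complexity factor
ucp = uucp * tcf * ecf  # ~6.1 adjusted points

productivity_factor = 20  # hours per UCP; calibrate from past projects
effort_hours = ucp * productivity_factor
print(f"UCP = {ucp:.2f}, estimated effort = {effort_hours:.0f} hours")
```

Even with addition rather than multiplication, a single use case still yields a large number at a productivity factor of 20, which echoes the point above: UCP is more reliable across a whole set of use cases, with a productivity factor calibrated from your own history.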
I'm working in a team that has been working in an agile approach consistently and fairly successfully, and this has served us well on the current project so far, as we incrementally build the product.
We're now moving into the next phase, though, and management is keen for us to set some specific deadlines ourselves for when we'll be in a position to demo and sell this to real customers, on the order of months from now.
We have a fairly well organised large backlog for each of the elements of functionality we'd like to include, and a good sense of the prioritisation of these individual bits of functionality.
The naive solution is to get the minimum list of stories that would provide a demo-able product, estimate all of those individually, and add them up and combine with our velocity to get a date, and announce we'll be demoing from then. That leaves no leeway though, and seems likely to result in a mad crunch as we get up to deadline time, which I desperately want to avoid.
As an improvement, I'd like to add in some proportion of extra, optional stories to act as either contingency or bonus improvements, depending on how we progress; but we don't have any idea what proportion would be sensible, or whether this is a standard approach.
I'm also concerned by having to estimate the whole of our backlog all in one go up-front, as that seems very time consuming, and it seems likely that we'll discover more information in the months before we get to that story, which will affect our estimates.
Are there recommended approaches to dealing with setting deadlines to allow for an agile development process? Most of the information I've seen seems to be around handling the situation once you've got a fixed deadline to hit instead. I'd also be interested in any relevant literature or interesting blog posts that cover this issue.
Regarding literature: the best book I know on estimation in software is "Software Estimation: Demystifying the Black Art" by Steve McConnell. It covers your case. Plus, it describes the difference between an estimate and a commitment (a set deadline, in other words) and explains how to derive the second from the first reliably.
The naive solution is to get the minimum list of stories that would provide a demo-able product, estimate all of those individually, and add them up and combine with our velocity to get a date, and announce we'll be demoing from then. That leaves no leeway though, and seems likely to result in a mad crunch as we get up to deadline time, which I desperately want to avoid.
This is the solution I have used in the past. Your initial estimate is going to be off a bit, so add some slack via a couple of additional sprints before setting your release date. If you get behind, you can make it up in the slack; if not, your product backlog gives you additional features that you can include in the release if you so choose. This is dependent on your velocity metric for the team, though, so adjust your slack based on how accurate you feel that metric is for the current team. Once you have a target release, you can circle back to see whether any known resource constraints might affect it.
The approach you describe is likely to be correct. You may want to estimate all desirable features and prioritise the UI elements (because investors and customers basically just see the shiny UI); your deadline is then the estimated completion date, plus some slack in the form of scaled estimates. Use the ratio between your current productivity and your worst period to create a pessimistic estimate, and use that same ratio to scale shorter estimates (e.g. the estimate for the minimum feature set).
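As a concrete sketch of that projection, here is the velocity-based calculation with a slack ratio applied; the backlog size, velocity, sprint length, and slack ratio are all assumed values to be replaced with your team's own data:

```python
import math
from datetime import date, timedelta

# Hypothetical inputs - replace with your team's real numbers.
minimum_demo_points = 120  # story points for the minimum demo-able product
velocity_per_sprint = 20   # points per sprint, from historical velocity
sprint_length_days = 14
slack_ratio = 0.25         # pessimistic buffer from past estimate accuracy

sprints_nominal = math.ceil(minimum_demo_points / velocity_per_sprint)
sprints_with_slack = math.ceil(sprints_nominal * (1 + slack_ratio))

target = date.today() + timedelta(days=sprints_with_slack * sprint_length_days)
print(f"{sprints_nominal} sprints nominal, {sprints_with_slack} with slack; "
      f"target demo date: {target}")
```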
We have a few products, and one of them uses flat files for persistence. Other products in the suite can use that data (via an API), but only one at a time.
We cannot put the whole files in a DB because the data is huge (20 GB+), but we have found a solution whereby some of the data can be put in the DB, e.g. user interpretations, meta info, markups, etc.
So the story is like:
"As a user i can concurrently access product A data from product B, C and D". That is huge i.e. approx 6-8 months
Even if I keep it as "As a user i can concurrently access product A data from product B". It’s still huge.. i.e. approx 5-6 months
Even doing like following, It’s still huge..
"As a user i can concurrently access feature X of product A data from product B". i.e. approx 4-5 months.
The problem is that once we can do one thing (one feature, one product), we can quickly do all of them.
How can I break this story into sub-stories? Or should I accept that some stories cannot be broken down further into sub-stories that fit in one iteration?
PS: we use Scrum.
Ask yourself (and your team): What makes the story so big? Is there absolutely no benefit that can be shown along the way? Features and products would be the obvious cut, but might not necessarily (as you've shown) be good enough.
How about sub-components of the feature? What are you putting in? Is any of it externally visible or valuable?
Do you have authentication, configuration, or other "standard" aspects of the product? You could cut those out and make them separate user stories.
Perhaps the 3-5 month features can be cut down further?
Anyway,
I hope this helps,
Assaf.
What you are describing is what we call an "epic": really a collection of smaller stories described by one much larger descriptor. I suggest you do some more analysis to determine which parts of the system will be impacted by your request. You might have groupings like Reports, Entry Forms, etc. that are individually impacted by the request.
Tackle the impact of the "epic" request on each area as a user story. For example, "Enhance Report X to include data from Product B", "Enhance Report X to include data from Product C", etc. I don't know enough about what you are changing to make the titles more descriptive but hopefully you get the idea. Keep at this deconstruction until the stories get down to the sweet spot of 2, 3, or 5 points each.
The nice thing about this is that it also will allow the PO to make a decision once they see all of the costs for this request. They may decide that we really only need access to data from Product B alone to be successful once they see the costs to include Product C also.
Agile fully supports the fact that some features have longer horizons than a typical sprint period (2-4 weeks). Certainly the story can be broken down into tasks. In this case, I recommend prioritizing the tasks for this story and burning them down using your Scrum methodology. At the end of each sprint you should still have "working software" that you can demonstrate and test. You may not have the full feature yet, and that is okay.
Is it better to describe improvements using percentages or just the differences in the numbers? For example, if you improved the performance of a critical ETL SQL query from 4000 ms to 312 ms, how would you present it as an "accomplishment" on a performance review?
In currency. Money is the most effective medium for communicating value, which is what you're trying to use the performance review to demonstrate.
Person hours saved, (very roughly) estimated value of $NEW_THING_THE_COMPANY_CAN_DO_AS_RESULT, future hardware upgrades averted, etc.
You get the nice bonus of showing that you're sensitive to the company's financial position: a geek who can align himself with what the company is really about.
Take potato
Drench Potato in Lighter Fluid
Light potato on fire
Hand potato to boss
Make boss hold it for 4 seconds.
Ask boss how long those 4 seconds felt
Ask boss how much better half a second would have been
Bask in glory
It is always better to measure relative improvement.
So, if you brought it down from 4000 ms to 312 ms, that is an improvement of 3688 ms, which is 92.2% of the original runtime. In other words, you reduced the runtime by 92.2%, bringing it down to only 7.8% of what it was originally.
Absolute numbers, on the other hand, are usually not comparable on their own. (If your original runtime was 4,000,000 ms, then an improvement of 3688 ms isn't that great.)
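Spelled out with the numbers from the question:

```python
old_ms, new_ms = 4000, 312

reduction = (old_ms - new_ms) / old_ms  # fraction of the runtime eliminated
speedup = old_ms / new_ms               # how many times faster it runs

print(f"Runtime reduced by {reduction:.1%}")                    # 92.2%
print(f"New runtime is {new_ms / old_ms:.1%} of the original")  # 7.8%
print(f"Speedup factor: {speedup:.1f}x")                        # 12.8x
```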
See this link for some nice chart suggestions.
Comparison to Requirements
If I have requirements (response time, throughput), I like to color code the absolute numbers like so:
Green: <= 80% of the requirement (response time); >= 120% of the requirement (throughput).
No formatting: Meets the requirement.
Red: Does not meet the requirement.
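A minimal sketch of that color-coding rule for a response-time measurement; the requirement and measurements in the example calls are hypothetical:

```python
def classify_response_time(measured_ms: float, required_ms: float) -> str:
    """Color-code a response-time result against its requirement."""
    if measured_ms <= 0.8 * required_ms:
        return "green"          # comfortably within the requirement
    if measured_ms <= required_ms:
        return "no formatting"  # meets the requirement
    return "red"                # does not meet the requirement

print(classify_response_time(312, 1000))   # green
print(classify_response_time(950, 1000))   # no formatting
print(classify_response_time(1200, 1000))  # red
```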
Comparisons are interesting, but only if we have enough of them to see trends over time: is our performance steadily improving or degrading? Ultimately, the business only cares whether we're meeting the requirement; it's only when we don't that they ask for comparisons to previous releases.
Comparison of Benchmarks
If I'm comparing benchmarks to some baseline, then I like to use percentages, but only if the benchmark is a statistically significant change from the baseline.
Hardware Sizing
If I'm doing hardware sizing or capacity planning, then I like to express the performance as the absolute number plus the cost per transaction. For example:
System A: 1,000 transactions/second, $0.02/transaction
System B: 1,500 transactions/second, $0.04/transaction
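One plausible way to arrive at such cost figures is to amortize the system's cost over the transactions it actually processes; the function below and all its inputs (annual cost, average utilization) are assumptions for illustration, not a formula from the answer above:

```python
def cost_per_transaction(annual_cost_usd: float, peak_tps: float,
                         avg_utilization: float = 0.3) -> float:
    """Amortize annual system cost over the transactions processed per year."""
    seconds_per_year = 365 * 24 * 3600
    annual_transactions = peak_tps * avg_utilization * seconds_per_year
    return annual_cost_usd / annual_transactions

# Hypothetical systems; real sizing would use measured utilization and costs.
print(f"System A: ${cost_per_transaction(200_000, 1000):.6f}/transaction")
print(f"System B: ${cost_per_transaction(600_000, 1500):.6f}/transaction")
```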
Use whichever appears most impressive given the change. By one method of calculation, that change sped up the query by roughly 1,300%, which looks more impressive than a 13x improvement, or:
============= <-- old query
= <-- new query
Although the graph isn't a bad method.
If you can express the improvement in money, then go for that. One piece of software I wrote many years ago saved a few engineers a little bit of time each day. Factoring in the cost of salary, benefits and overhead, it turned into savings of more than $12k per year for a small company.
-Adam
Rule of thumb: whichever sounds more impressive.
If you went from 10 tasks done in a period to 12, you could say you improved performance by 20%.
Saying you did two more tasks doesn't seem that impressive.
In your case, both numbers sound good, but try different representations and see what you get!
Sometimes graphics help when the improvement is spread across a number of factors but the combined figure somehow does not look that cool.
Example: you have 5 parameters, A, B, C, D and E. You could make a bar chart with those 5 parameters and their "before and after" values side by side for each parameter. That sure will look impressive.
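A quick sketch of such a before/after bar chart, assuming matplotlib is available; the parameter names and values are made up:

```python
import numpy as np
import matplotlib.pyplot as plt

params = ["A", "B", "C", "D", "E"]
before = [4000, 1200, 900, 3100, 650]  # hypothetical "before" values (ms)
after = [312, 400, 150, 800, 220]      # hypothetical "after" values (ms)

x = np.arange(len(params))  # one group per parameter
width = 0.35                # bar width within each group

fig, ax = plt.subplots()
ax.bar(x - width / 2, before, width, label="Before")
ax.bar(x + width / 2, after, width, label="After")
ax.set_xticks(x)
ax.set_xticklabels(params)
ax.set_ylabel("Time (ms)")
ax.legend()
plt.show()
```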
God, I'm starting to sound like my friend from marketing!
runs away screaming
You can make numbers and graphs say anything you want; the important thing is to make them say something meaningful and relevant to the audience you're presenting them to. If it's end users, you can show them the differences in screen refreshes (something they understand); for managers, perhaps the reduced number of servers they'll need in order to support the application ($ savings); for financial people, it's all about the $: how much did it save them? A general rule: the less technical the group, the more graphical and dramatic you need to be.
It took me some time, but I've finally managed to write down all the tasks that need to go into Version 1.0 of the software product I'm working on.
The list is almost 1000 items long.
We are a 3-person team, and we've somehow managed to get this far using MindMeister, Google Docs, #TODOs in the code, etc. Now I have everything neatly grouped by feature, but how do I prioritize all this and turn it into a schedule?
Any advice would be greatly appreciated. I'm not looking for software recommendations, however; I'm seeking advice on how to take this enormous bag of tasks, ranging from bug fixes to application modules, and work out in what order I should do them.
Prioritize ruthlessly. 1000 action items is a lot, and the odds are that as you go you'll modify some, toss others, and add new ones. Your list will not survive the things you learn by actually building the software, and if you don't do the most important stuff first, you'll end up with a mess.
For every item or feature, you have to answer the question: Can the product be at all usable or useful without this? If yes, it can wait; everything else goes to the head of the queue.
After that, I like to group milestones by focus: I'll do a features milestone (or multiple ones if there are natural small clusters of features), a UI milestone where I'll focus on AJAX/rich client interactivity, a performance milestone where I profile and do database & server tuning, etc. Or break them up some other way - but definitely break them up. Work in smaller bites with specific focus for each iteration, and make sure each iteration is solid before moving on.
My recommended approach is based on Agile methodology best practices.
So, you have what in Agile terms is called a "backlog" defined. That's great, and an important first step.
A commonly used Agile iteration length is 2-3 weeks, with a set of releasable features at the end of each iteration. This will establish the "heartbeat" of your development process. Next, you'll decide how to organize and group the features into Stories and Tasks.
You'll want to grow the underlying architecture and let it naturally emerge based on the ordering of the Stories and Tasks that you select from your backlog.
It's important to mitigate risks early, so you'll want to select first those items that involve performance or implementation unknowns, since they pose the largest risk and could result in the largest rework impact. For example, establishing the messaging infrastructure might be an early architectural feature, included because you selected a Story that requires a persistent message to be delivered to complete a unit of work.
Can you group the set of features into functional categories that might naturally evolve to describe the 1.0 release as a System of Systems? For example, the Administrative functions, the User Profile Management, Reporting, external integration layers, Database Access Objects, etc.
What are the simplest Stories / Use Cases that you can write that map to some of the ~1,000 features/requirements you've defined? Select a set of Stories (or individual Tasks from a Story, if the Story itself is too large to implement in a single iteration). It will take some additional effort, but recomposing your requirements into a set of Stories/Tasks is important.
You'll find that you will refactor during subsequent iterations, but your steady two-week heartbeat will keep delivering real functionality.
At various points you may want to schedule an architecture iteration just to focus on some clean-up / refactoring, and that's OK too.
Since you are indicating that all these items are required, I will assume that there is not much chance of dropping items off the list (at least for now). Given that, you have 2 large tasks at hand - deciding when to do items, and determining how long it will take to do them.
Since you have already conveniently grouped the items by feature, I would start by prioritizing the features. Hopefully this will significantly reduce your working set, and allow you to actually get through it in a reasonable amount of time.
I would prioritize each feature based on its risk. Some things are easy to implement and others are difficult. Since they are all required, do the riskiest features first, while your schedule is still flexible enough to absorb any unanticipated problems. Leave them until the end of your cycle and Murphy's law will strike you down.
Given your small team, I would just send the list of features around and ask everyone to mark it if they consider it a risky or difficult feature to implement. Add up all the marks and you have your "risk assessment", with the highest scoring items getting assigned first.
Alternatively, if you have easy access to your customer, ask them to rate the "risk" associated with each feature (in this case, risk refers to the worst-case scenario of not having the feature: if not having something would merely be annoying, it is not risky; if not having it would result in them not using your product, it is high-risk).
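A trivial sketch of tallying those marks; the feature names and ballots are invented:

```python
from collections import Counter

# Each team member lists the features they consider risky or difficult.
ballots = [
    ["auth", "reporting", "import"],  # developer 1
    ["auth", "import"],               # developer 2
    ["import", "search"],             # developer 3
]

risk_scores = Counter(feature for ballot in ballots for feature in ballot)

# Highest-scoring (riskiest) features get scheduled first.
for feature, score in risk_scores.most_common():
    print(f"{feature}: {score}")
```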
Now that you have a priority queue, it is time to estimate. For the initial estimates, I would simply do an order of magnitude estimate for each of the features. Since it sounds as if you have already broken the features up, you should be able to get a decent feel for whether something is going to take hours, days or weeks. From the sounds of it, you are still early in development, so I don't believe there is much point in trying to get an accurate estimate on something that won't be implemented for another month or so.
As you pull items off your queue, have your team provide more accurate estimates by identifying granular tasks that shouldn't take more than a few hours. If you want to refine your order of magnitude estimates, you can progressively provide quick estimates for the remaining tasks based on your up-to-date knowledge of the system.
This should provide you with a fairly accurate short term schedule, and a fuzzier long term schedule that will progressively get more accurate.
Finally, if you are facing a long development cycle, I would recommend you identify certain target goals or dates, and when you meet those goals, sit down and repeat this whole process. I would never go longer than 2 weeks without revisiting these things. New items will get added, others will get overtaken and become obsolete, and others will become higher risk as you better understand the problem. All of this must be taken into account.
During our iteration planning, we frequently find ourselves in the same position as this guy - How to estimate a programming task if you have no experience in it
I definitely agree with prototyping before you can give a reasonable estimate. But the same applies to anything that needs a bit of architecture and design - but I'm not that comfortable doing all this outwith the scope of a sprint.
The basic idea is that you identify as many tasks as you can that you're confident of, and estimate these as normal. For those areas that you're unsure of then there should be two 'types' of task identified: Investigation & Implementation.
Investigation tasks are brief descriptions of work that you're just unsure of, for example "Investigate how to bind Control X to data". An estimate is provided for these.
The Implementation task is a traditional rough guess, probably based on the story points assigned, of how long you think it would take to implement the feature.
During the sprint, when the investigation tasks have been completed, the developer should then be at a stage where they have a much better idea what is going on. 'Proper' Tasks can then be identified, which take the place of the Implementation placeholder. In addition, further Investigation tasks may be identified at this stage, and the cycle continues.
In the example above, we start with an Investigation task estimated at 7 hours and an Implementation task estimated at 14. Once the first Investigation has been completed, Tasks 1, 2 and 3 can be identified and estimated with some degree of certainty, where Task 3 is another Investigation task from which Tasks 4 and 5 will be identified at a later stage. As you can see, the first Implementation estimate had the feature delivered within 14 hours, but in reality the implementation tasks took at least 4 + 7 + 3 + 4 + 2 = 20 hours - over 40% more than the initial estimate.
[Diagram of the Investigation/Implementation task breakdown: http://www.duncangunn.me.uk/myweb/images/estimate.png]
All thoughts are welcome - my gut instinct is this will fly - am I right or am I the Wrong Brothers?
Cheers!
What we do:
Some features involve new technology. We can't accurately estimate them. Period.
We make up a number, based on a couple of things: how hard does it "feel"? Can we get by with some kind of "partial" or "just-enough" implementation?
If it's hard, then it's hard. It will be expensive.
If there's a lot of parts, with a kernel of goodness and some bonus stuff layered on, we have a possibility of putting just the kernel into a release, and setting other stuff aside for later. A very few things are "all or nothing" where a partial release isn't possible. In that case, we have to provide enough time for "all", and that gets expensive.
Our standard approach is to get stuff that works, and possibly defer things to a later sprint if we run out of time because of unexpected complexities.
What you're calling "investigation", we call technical spike sprints. For stuff that's new, we make up an estimate to placate managers who feel it necessary to overplan things. Then we spike the technology. Once it's spiked, we can revise the estimates based on what we now know.
Actually, the implementation of the feature took 27 hours: you forgot the first investigation of 7 hours, so in reality the implementation took almost twice as long as the 14-hour estimate.
There are two ways you can go on this:
Just make the estimate as best you can and potentially experience a blowout in your sprint and a drop in project velocity (you should only do this if the feature is both urgent and critical); or
Schedule the investigation for this sprint and leave the implementation for another sprint - without an idea of how long the task will take, the Product Owner does not have enough information to make a decision about in which sprint to schedule it or even whether to do it at all. Only tasks that have been estimated should be included in your sprint.
The first choice means your sprint and project estimates are somewhat arbitrary. The second choice gives much more predictability to your sprints.
In your example, the initial investigation may be scheduled for Sprint 1 but without knowledge of how long the task will take the Product Owner can't decide how to schedule it. If you came back with an estimate of 200 hours the Product Owner may decide not to do that feature at all, or to delay it until Release 2 of the product. The estimate comes in and the Product Owner schedules Task 1, Task 2 and the investigation of Task 3 for Sprint 2. After estimating Task 3, Tasks 4 and 5 can be scheduled in Sprint 3 or later.
Estimating features is usually a complex task, and your estimates will become better over time. A good approach can be to estimate features in story points. A story point is an abstract value (with a meaning agreed within the team) that expresses the complexity of the problem.
You should assign the same complexity (the same number of story points) to features of similar complexity. Later on, it is then enough to estimate only a smaller set of features (or to look at the historical data) to work out how much time you need.
Features of similar complexity need a similar time effort for implementation.
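As a sketch of that conversion, assuming you have historical (story points, actual hours) pairs from completed features; all the numbers below are invented:

```python
# Historical (story points, actual hours) pairs from completed features.
history = [(3, 14), (5, 22), (2, 9), (8, 40), (5, 26)]

total_points = sum(points for points, _ in history)
total_hours = sum(hours for _, hours in history)
hours_per_point = total_hours / total_points

remaining_points = 34  # sum of story points still in the backlog
estimate = remaining_points * hours_per_point
print(f"~{hours_per_point:.1f} hours/point; "
      f"remaining effort is roughly {estimate:.0f} hours")
```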