Project management evaluate quality and productivity [closed] - project-management

I was recently promoted to project manager. This is a new role for me, and so far I'm finding the job pretty hard.
Can you give me some examples of metrics that I could use to measure the quality and productivity of my team members? I need to measure this for both developers and testers.

You can measure a lot of things:
lines of code produced.
keystrokes.
coffee consumed.
trash produced.
However, that's just numerosity -- numbers for the sake of having numbers.
Before you measure your team, find out how you are measured.
Find out how your overall development organization is measured.
Find out what measurements the overall organization is trying to optimize.
Then -- and only then -- try to find a way to map the big picture goals down to your team. If you're not measuring progress toward the organization's overall goals, then you're just collecting numbers.
If you're going to collect numbers, weighing the coffee every morning may be the best indicator of work being done. Seriously.

Here is a good link suggesting that most metrics you choose will probably not end up working the way you want: Developer Metrics - Useful or Harmful?
For example, if you measure lines of code, the impact will be more verbose code.
Any metric you choose will likely end up being 'gamed' by the people you are trying to measure.

Value delivered. Work with your customers (product owner) to find a way to measure value in your organization and measure that value. Deliver it frequently (daily, weekly, monthly) and you'll be just fine.

Can you give me some examples of metrics that I could use to measure the quality and productivity of my team members?
A few examples of metrics for an Agile Team to measure PRODUCTIVITY:
Story Points / Velocity Points: This is a relative sizing unit that can be used to measure the complexity of the work to be done.
Sprint Burndown: This can be used to measure progress in hours during an iteration or Sprint.
Release Burndown: This can be used to measure progress in story points during a release (a small tracking sketch follows this list).
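As a rough illustration of how a burndown can be tracked day to day, here is a minimal sketch; the sprint length, committed points, and daily remaining figures are invented for the example.

```python
# Minimal burndown-tracking sketch. All numbers are illustrative assumptions,
# not data from any particular team.

committed_points = 40  # points the team committed to for the sprint

# points still remaining at the end of each day of a 10-day sprint
remaining_by_day = [40, 37, 35, 35, 30, 26, 22, 18, 12, 5]

ideal_burn_per_day = committed_points / len(remaining_by_day)

for day, remaining in enumerate(remaining_by_day, start=1):
    ideal_remaining = committed_points - ideal_burn_per_day * day
    print(f"Day {day:2d}: remaining={remaining:3d}  ideal={ideal_remaining:5.1f}")
```

The same idea works for a release burndown, just with story points per sprint instead of hours per day.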
A few examples of metrics for an Agile Team to measure QUALITY:
Hudson CI: You could use a continuous integration tool which constantly keeps an eye on the code base for you automatically. Some plugins that can be used for quality checks are:
Cobertura
PMD
CheckStyle

Find out why your company was interested in adopting Scrum in the first place.
If it was to produce better quality code, then you could focus on things like test coverage, bug count, etc. I like the CRAP index.
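For reference, the CRAP index combines a method's cyclomatic complexity with its test coverage. Here is a small sketch of the published formula, with made-up numbers:

```python
def crap_index(cyclomatic_complexity: int, coverage_percent: float) -> float:
    """CRAP index: comp(m)^2 * (1 - cov(m)/100)^3 + comp(m).
    High complexity with low coverage scores worst."""
    cov = coverage_percent / 100.0
    return cyclomatic_complexity ** 2 * (1 - cov) ** 3 + cyclomatic_complexity

# A complex, untested method scores far worse than a simple, well-tested one.
print(crap_index(15, 0))    # 240.0
print(crap_index(15, 95))   # ~15.03
print(crap_index(3, 80))    # ~3.07
```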
Many companies adopt Scrum because most of their problems - including low quality code, tight deadlines and rework - are caused because they produce the wrong code in the first place. Either requirements are misunderstood, or the stakeholders don't quite know what they want. If that's your problem, you might want to measure throughput (how long it takes a story, on average, to go from analysis through to release) or feedback (how long it takes you to know whether the work you did is actually usable, or not - bearing in mind that when it's not usable, you want to know as soon as possible).
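A minimal sketch of measuring that kind of throughput, assuming you record when analysis starts and when the story is released (the story data below is invented):

```python
# Rough sketch: average days from analysis start to release, per story.
from datetime import date

stories = [
    {"id": "A-101", "analysis_started": date(2011, 3, 1), "released": date(2011, 3, 18)},
    {"id": "A-102", "analysis_started": date(2011, 3, 4), "released": date(2011, 3, 15)},
    {"id": "A-107", "analysis_started": date(2011, 3, 7), "released": date(2011, 4, 2)},
]

cycle_times = [(s["released"] - s["analysis_started"]).days for s in stories]
print("average days from analysis to release:", sum(cycle_times) / len(cycle_times))
```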
I try to avoid measuring things like productivity. It's very easy to be productive without being effective. Focus on the Goal. Most of the metrics in Kanban can be used alongside Scrum and will help here. I very much like Cumulative Flow Diagrams, because they show all kinds of feedback loops and constraints in the system.
Oh, whatever you do - measure the team, not individuals. As soon as people think they are being measured as individuals, they'll stop playing nice with the team.

Productivity is the only important measure of success: did my team accomplish what we said we could do, within the timeframes specified, and to a high level of quality (passing UAT)? If not, why not?
A good way to "measure" this is by measuring the team's velocity and then using that as a benchmark. Velocity represents both what the team can do as well as how accurately they are able to plan their own level of effort.
Here are a few good articles on velocity and how to calculate it:
http://www.versionone.com/Agile101/velocity.asp
http://michaellant.com/2010/07/23/calculating-the-velocity-of-your-agile-projects/
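As a rough sketch of the calculation those articles describe, velocity is simply the story points completed per sprint, averaged over recent sprints; the figures below are invented.

```python
# Hedged sketch: velocity as a planning benchmark.
completed_points_per_sprint = [21, 25, 19, 24, 23]

velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)
print(f"average velocity: {velocity:.1f} points/sprint")

backlog_points = 180
sprints_remaining = backlog_points / velocity
print(f"forecast: ~{sprints_remaining:.0f} more sprints to burn down the backlog")
```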

When I was first promoted to PM, a decade ago, I tried to track everything. Only one thing really matters in the end and that is the team's happiness (the team includes the client). If everyone is content with the pace, quality, and environment of the project then you only need to adapt whatever metrics help the team to improve. You can identify those metrics with the team in retrospectives. Whether they find velocity, functional points, lead and cycle time, test coverage, or any other metrics useful - those are the things you should help them to measure. I think the most powerful thing a new PM can focus on is communication, not measurement.

Related

Assessment of a project manager's volume of work - what is a good methodology? [closed]

Currently, my company uses agile as its development approach. I was approached by my boss to come up with a methodology for determining the amount of work a project manager does on a given project in flight. To be honest, I can't really think of anything foolproof.
I guess the best question is how do we assess how busy, on a day to day basis, a project manager is?
Remember that ANY metric you can come up with is most likely going to be gamed.
[ Do I get a badge for on-topic link to Joel On Software? :) ]
Having said that, you can try a union of the following approaches:
Developer feedback!!! (e.g. feedback on a good PM would be "I had problems X, Y and Z and he made them disappear"). Not so good for measuring how "busy" a PM is, but really good for measuring how effective he/she is.
Volume and rated clarity of project plans (easily gamed)
Rate of change of project plans (easily gamed)
Amount of meetings/meeting time (easily gamed)
Success rates of projects (on timeliness vs. % of features delivered vs. customer satisfaction). Not easily gamed but devil's own work to normalize this across projects.
Timesheets will measure the amount of work in one sense (you can see how their day breaks down and so on) but not I think in the sense you want.
Ultimately I don't believe there is a useful metric for Project Managers in this sense, but I don't think that's an issue.
I think ultimately you should measure project success rather than "busy-ness". After all, why do you care how busy the PM is if they deliver successful projects?
One PM may spend half a day putting together a risk log and mitigation plan which contains 20 risks, another may spend 2 days putting together one which only has 5 risks but none of those numbers are any more useful as a metric than lines of code. The key thing is not how long you spent doing it, how many risks you identified, how big your mitigation plans are, but whether you actually managed risk on the project successfully.
You're better off looking at what a Project Manager is meant to do, which is to deliver projects on-time, to budget and to customer satisfaction (which I'd use as the ultimate measure of quality rather than defects).
After all, do you measure how "busy" the CEO is? Or is he just judged on the profit the company makes?
To do this:
Time - The only way it can really be gamed is by massively padding estimates and plans and this can be minimised by reviewing the plans and estimates and having all relevant parties agree them (developers, PM, client). The other side of this is that the PM must agree to the plan rather than have the implementation date foisted on him or her. You might want to measure this on either the overall implementation or each milestone.
Budget - Measurable but gameable. For most development projects the key thing here is honest timesheets from the developers, and the best way to ensure this is to make it so the PM is the PM but not their line manager. That way the developers have someone to fight their corner (a technical director, for instance) if they're being pressured to fill in timesheets to keep the budget down. Again, the PM should agree the budget; it's not reasonable to expect him to deliver on something he's told you is unreasonable.
Customer satisfaction - Hard to measure, so I'd suggest that you keep it simple and go with a straightforward post-project review with the account manager and marks out of 10 for communication, delivery and whatever else is important. It is subjective, but ultimately so is customer satisfaction.
But a lot of it depends on the company culture. For some organisations the key thing will be billable hours, others developer satisfaction will be part of the mix.
I am trying to understand WHY you have been asked to estimate the amount of work that a project manager does on a project.
At best it is just a request for a rule of thumb; otherwise it indicates that your boss doesn't know the first thing about running a project. Even when projects look very similar, there will always be something unique about each one:
The team is not identical (teaching the new guy the ropes takes time)
The spec might vary just a tiny bit (and that tiny bit might double the workload)
Even the season might influence the outcome
and so on and so forth
Each and every condition on the project might change the workload of the project manager, so it will always be a subjective assessment.
I would suggest you use the same Burn Down and Level of Effort that you use for the developers. A PM's tasks in an Agile environment are a bit different (and they differ from shop to shop), but the PM should be able to provide a list of tasks, etc. I'm thinking positively and seeing it as your boss's way of determining how much availability the PM has.
Most project managers equate responsibility with status, so a project manager who has spare capacity is quite likely to volunteer to take on a new responsibility, because it's in his/her own best interest. In all but the most functional organizations it's often better to be visibly overloaded, for that heroic look.
It's more likely to be in the organization's best interest to slightly under load its project managers, so that there is some spare capacity available should something start to go wrong. A project manager might well choose to apply his/her spare capacity in some constructive way in any case. Excessive politicking or other unconstructive activity is a good indicator of someone who could be more constructively deployed. Even on agile projects, workload tends to be uneven across a project cycle - e.g. delivery is often a management-intensive activity - somebody who is continuously heavily loaded probably has too much to do, and may be ignoring or hiding a serious problem.
If the next level of management conducts regular project reviews, pays attention to how many problems are being escalated, whether the project reports correlate with the news from the grapevine, and does some basic estimation on workload projections for each project manager, then the organization should be able to run a reasonably efficient system.
Managers tend to be political and psychological animals. Any methodology that doesn't take that into account is ignoring reality, so a good methodology for this problem is likely to be based more on observed behaviors than on hard numbers.
Excuse me if I am being too much of a purist, but the tag and the question call for Agile. What would a project manager be in Agile? You might actually be trying to assess the work being done by a product owner or a scrum master.
In any case, both roles perform several tasks that are hard to measure, so probably your boss is looking at the wrong picture.
For instance, a scrum master is "The person responsible for supporting the development team, clearing organizational roadblocks, and keeping the agile process consistent". Basically, the scrum master is a coach and a facilitator. Blocking disruptive requests or distractions created by higher levels of management, by negotiating or persuading them to follow scrum practices, is one of the skills commonly used by scrum masters. Several of these soft skills are hard to measure as "work" since they do not involve working on a computer or producing a report.
I think the metric your boss would benefit most from is more related to how effective the team is and to how well the scrum master facilitates the work of his team-mates. DVK also has a very valid point: the metrics you create can be "gamed", so it is best to trust that your managers are busy if your projects are progressing and your teams are happy and working as a team.

How to measure software development performance? [closed]

I am looking for some ways to measure the performance of a software development team. Is it a good idea to use the build tool? We use Hudson as our automatic build tool. I wonder if I can take the information from Hudson reports and derive from it the progress of each of the programmers.
The main problem with performance metrics like this is that humans are VERY good at gaming any system that measures their own performance to maximize that exact performance metric - usually at the expense of something else that is valuable.
Let's say we do use the Hudson build to gather stats on programmer output. What could you look for, and what would be the unintended side effects of measuring it once programmers are clued in to it?
Lines of code (developers just churn out mountains of boilerplate code, and other needless overengineering, or simply just inline every damn method)
Unit test failures (don't write any unit tests, then they won't fail)
Unit test coverage (write weak tests that exercise the code, but don't really test it properly)
Number of bugs found in their code (don't do any coding, then you won't get bugs)
Number of bugs fixed (choose the easy/trivial bugs to work on)
Actual time to finish a task measured against their own estimate (estimate higher to give more room)
And it goes on.
The point is, no matter what you measure, humans (not just programmers) get very good at optimizing to meet exactly that thing.
So how should you look at the performance of your developers? Well, that's hard. And it involves human managers, who are good at understanding people (and the BS they pull), and can look at each person subjectively in the context of who/where/what they are to figure out if they are doing a good job or not.
What you do once you've figured out who is/isn't performing is a whole different question though.
(I can't take credit for this line of thinking. It's originally from Joel Spolsky. Here and here)
Do NOT measure the performance of each individual programmer simply using the build tool. You can measure the team as a whole, sure, or you can certainly measure the progress of each programmer, but you cannot measure their performance with such a tool. Some modules are more complicated than others, some programmers are tasked with other projects, etc. It's not a recommended way of doing this, and it will encourage programmers to write sloppy code so that it looks like they did the most work.
No.
Metrics like that are doomed to failure. Different people work on different parts of the code, on different classes of problem, and absolute measurements are misleading at best.
The way to measure developer performance is to have excellent managers that do their job well, have good specs that accurately reflect requirements, and track everyone's progress carefully against those specs.
It's hard to do right. A software solution won't work.
I think this needs a very careful approach when deciding how to measure developer performance, as most of the traditional methods such as lines of code, number of check-ins, or number of bugs fixed have proven to be subjective given today's software engineering concepts. We need to value a team-performance approach rather than measuring individual KPIs in a project. However, working in a commercial development environment, it is important to keep track of and a close eye on the following factors for individual developers:
Code review comments – For each project, we can decide how many code reviews need to be conducted in a given period. Based on the code reviews, individuals get remarks about their coding-standard improvements. Recurring issues in the same individual's code need to be brought to attention. You can use automated code review tools or manual code reviews.
Test coverage and completeness of tests – The % to be covered needs to be decided upfront, and if a certain developer repeatedly fails to meet it, it needs to be addressed (a minimal coverage-check sketch follows this list).
Willingness to sign up for complex tasks and deliver them without much struggle
Achieving what’s defined as “Done” in a user story
Mastery level of each technical area.
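Here is the coverage-check sketch referred to above; the threshold and the per-developer report data are assumptions for illustration, not prescriptions.

```python
# Minimal sketch of a per-developer coverage check against an agreed threshold.
COVERAGE_THRESHOLD = 80.0  # agreed upfront by the team (assumed value)

coverage_report = {
    "alice": 91.5,
    "bob": 72.0,
    "carol": 84.3,
}

for developer, coverage in coverage_report.items():
    status = "OK" if coverage >= COVERAGE_THRESHOLD else "below threshold - follow up"
    print(f"{developer:8s} {coverage:5.1f}%  {status}")
```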
With an agile approach in some of the projects, the measurements of the development team and the expected performance are decided based on releases. At each release planning session, different 'contracts' are negotiated with the team members for the expected performance. I find this approach more successful, as there is no reason to adhere to UI-related measurements in a release where a complex algorithm is to be delivered.
I would NOT recommend using build tool information as a way to measure the performance / progress of software developers. Some of the confounding problems: possibly one task is considerably harder than another; possibly one task is much more involved in "design space" than "implementation space"; possibly (probably) the more efficient solution is the better solution, but that better solution contributes less lines of code than a terribly inefficient one which provides many many more lines of code; etc.
Speaking of KPIs for software developers, www.smartKPIs.com may be a good resource for you. It contains a user-friendly library of well-documented performance measures. At the moment it lists over 3300 KPI examples, grouped in 73 functional areas, as well as 83 industries and sub-categories.
KPI examples for software developers are available on this page: www.smartKPIs.com - application development. They include, but are not limited to:
Defects removal efficiency
Data redundancy
In addition to examples of performance measures, www.smartKPIs.com also contains a catalogue of performance reports that illustrate the use of KPIs in practice.
Examples of such reports for information technology are available on: www.smartKPIs.com - KPIs in practice - information technology
The website is updated daily with new content, so check it from time to time for additional content.
Please note that while examples of performance measures are useful to inform decisions, each performance measure needs to be selected and customized based on the objectives and priorities of each organisation.
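As one concrete illustration of the first KPI example above, defect removal efficiency is commonly computed as the share of total defects caught before release; a minimal sketch with invented numbers:

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """Common definition of DRE: percentage of all known defects caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total if total else 100.0

# e.g. QA found 92 defects during development, customers reported 8 after release
print(defect_removal_efficiency(92, 8))  # 92.0
```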
You would probably do better measuring how well your team tracks to schedules. If a team member (or the entire team) is consistently late, you will need to work with them to improve performance.
Don't short-cut or look for quick and easy ways to measure the performance/progress of developers. There are many, many factors that affect the output of a developer. I've seen a lot of people try various metrics ...
Lines of code produced - encourages developers to churn out inefficient garbage
Complexity measures - encourages over analysis and refactoring
Number of bugs produced - encourages people to seek out really simple tasks and to hate your testers
... the list goes on.
When reviewing a developer you really need to look at how good their work is and define "good" in the context of what the company needs and what situations/positions the company has put that individual in. Progress should be evaluated with equal consideration and thought.
There are many different ways of doing this. Entire books have been written on the subject. You could use reports from Hudson, but I think that would lead to misinformation and provide crude results. Really you need a task-tracking methodology.
Check how many lines of code each has written.
Then fire the bottom 70%.. NO 90%!... EVERY DAY!
(for the folks that aren't sure, YES, I am joking. Serious answer here)
We get 360 feedback from everyone on the team. If all your team members think you are crap, then you probably are.
There is a common mistake that many businesses make when setting up their release management tool. The Salesforce release management toolkit is one of the best ones available in the market today, but if you do not follow the vital steps of setting it up, you will definitely have some very bad results. You will get to use it but not to its full capacity. Establishing release management processes in isolation from the business processes is one of the worst mistakes to make. Release management tools go hand in hand with the enterprise strategy, objectives, governance, change management plus some other aspects. The processes of release management need to be formed in such a way that everyone in the business is on the same page.
Goals of release management
The main goal of release management is to have a consistent set of reliable and repeatable processes that are resource independent. This enables the achievement of the most favorable business value while at the same time optimizing the utilization of resources available. Considering that most organizations focus on running short, high-yield business projects, it is essential for optimization of the delivery value chain of the application to make certain that there are no holdups in the delivery of the business value.
Take for instance the force.com migration toolkit, as this tool has proven to be great in governance. A release management tool should allow for optimal visibility and accountability in governance.
Processes and release cycles
The release management processes must be consistent for the whole business. It is necessary to have streamlined and standardized processes across the various tool users. This is because they will be using the same platform and resources that enable efficient completion of their tasks. Having different processes for different divisions of your business can lead to grievous failures in tool management. The different sets of users will need to have visibility into what the others are doing. As aforementioned, visibility is of great importance in any business process.
When it comes to the release cycles, it is also imperative to have one centralized system that will track all the requirements of the different sets of users. It is also necessary to have this system centralized so that software development teams get insight into the features and changes requested by the business. Requests have to become priorities to make sure that the business gets to enjoy maximum benefit. Having a steering team is important because it is involved in the reviewing of business requirements plus also prioritizing the most appropriate changes that the business needs to make.
The changes that should happen to the Salesforce system can be very tricky and therefore having a regular meet up between the business and IT is good. This will help to determine the best changes to make to the system that will benefit the business. By considering the cost and value of implementing a feature, the steering committee has the task of deciding on the most important feature changes to make.
Here is also some good research: http://intersog.com/blog/tech-tips/how-to-manage-millennials-on-software-development-teams
This is an old question but still, something you can do is borrow velocity from agile software development, where you assign a weight to each task and then calculate how much "weight" you complete in each sprint (or iteration, or whatever development life cycle you use). Of course this goes hand in hand with the fact that, as a commenter mentioned before, you need to actively keep track yourself of whether your developers are working or chatting online.
If you know your developers are working responsively, then you can rely on that velocity to give you an estimate of how much work the team can do. If at any iteration this number drops (considerably), then either it was poorly estimated or the team worked less.
Ultimately, the use of KPIs together with velocity can give you per-developer (or per-team) insights on performance.
Typically, directly using metrics for performance measurement is considered a Bad Idea, and one of the easy ways to run a team into the ground.
Now, you can use metrics like % of projects completed on-time, % of churn as code goes toward completion, etc...it's a wide field.
Here's an example:
60% of mission-critical bugs were written by Joe. That's a simple, straightforward metric. Fire Joe, right?
But wait, there's more!
Joe is the Senior Developer. He's the only guy trusted to write ultra-reliable code, every time. He's written about 80% of the mission-critical software, because he's the best.
Metrics are a bad measurement of developers.
I will share my experience and how I learnt a very valuable process for measuring team performance. I must admit I had fallen into tracking KPIs simply because most other departments did the same, not really for the insight, until I had the responsibility to evaluate developer performance; after a good deal of reading I arrived at the following solution.
On every project, I would engage the team in a discussion of the project requirements and involve them so everyone knows what is to be done. In the same discussion, through collaboration, we would break the project into tasks and weight those tasks. Previously we would estimate project completion as 100%, where each task has a percentage contribution. This did work for a while but was not the best solution. Now we base each task on a weight, or points to be exact, and use relative measurement to compare tasks and differentiate the weights. For example, suppose there is a requirement to develop a web form to gather user data.
The tasks would break down something like this:
1. User Interface - 2 Points
2. Database CRUD - 5 Points
3. Validation - 4 Points
4. Design (css) - 3 Points
With this strategy we can pinpoint a weekly approximation of how much we have completed and what is pending on the task list. We can also pinpoint who has performed best.
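A minimal sketch of that weekly calculation, using the example tasks above (the "done" flags are invented):

```python
# Point-based progress tracking as described above.
tasks = [
    {"name": "User Interface", "points": 2, "done": True},
    {"name": "Database CRUD",  "points": 5, "done": True},
    {"name": "Validation",     "points": 4, "done": False},
    {"name": "Design (css)",   "points": 3, "done": False},
]

total = sum(t["points"] for t in tasks)
done = sum(t["points"] for t in tasks if t["done"])
print(f"completed {done}/{total} points ({100 * done / total:.0f}%)")
```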
I must admit that I still face some challenges with this strategy, such as the fact that not every developer is comfortable with every technology. Some are excited to learn a technology simply because they see a high percentage of points fall in that area; others do what they can.
Remember, do not track a KPI for its own sake; track it for its insight.

How to make decisions while choosing a project in an IT company? [closed]

On what criteria do companies choose their projects, and what are the things they consider when choosing a project?
Return on investment, if they want to stay in business.
Return on investment is of course the bottom line. But it takes a number of factors to get there:
Their own expertise: Do we have people with skills needed to do this? Can we hire some?
Available resources: Programmers, Managers, Hardware, Time, Financial resources.
PR: Even if we don't get paid that much, will this project get us more business?
PR: Pay is great, but do we really want to be associated with this client?
Their Mission/Goals: What fields/niches do they want to compete in? Do they want to expand?
Past experiences: We did a project like this, it was horrible. Lets not do that again.
Past experiences: It was fun last time, AND we can reuse half the code! Lets do it!
Usually the management uses more sophisticated matrices and all to make their decision, but more or less, these are the factors they usually put in.
I am sure someone can provide a more specific/scientific answer.
Good question. The straightforward answer may seem to be Return on Investment (ROI). However, ROI is criticised for three reasons:
Short-termism: ROI is seldom calculated beyond 5-7 years (due to the increasing discount applied to any cash flows produced further in the future); some projects really worth doing realise their full benefits much further in the future (a small discounting sketch follows this list).
It's hard or impossible to put a monetary value on some things. The often-cited example is human life; another is moral principles. However, the thing most frequently encountered in the software world that is very hard to put a price on is the opportunities that will never emerge unless the project goes live. It's hard to put a value on emerging opportunities, because we don't know what they are until they actually emerge. And I don't mean opportunities that will simply not "open", but ones that specifically emerge.
ROI doesn’t take into account wider strategy. The importance of strategy in software world should not be underestimated and the strategy should take into the account specifics of providing software products or services. Geoffrey Moore’s “Crossing the Chasm” is a brilliant book I recommend and is very pertinent to the software world.
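To illustrate the discounting behind the short-termism point: each future cash flow is divided by (1 + discount rate) raised to the year it arrives, so distant returns barely move an ROI figure. A small sketch with illustrative numbers:

```python
# Present value of a future cash flow; all figures are invented for illustration.
def present_value(cash_flow: float, discount_rate: float, year: int) -> float:
    return cash_flow / (1 + discount_rate) ** year

discount_rate = 0.10
for year in (1, 5, 10, 15):
    pv = present_value(100_000, discount_rate, year)
    print(f"100k received in year {year:2d} is worth ~{pv:,.0f} today")
```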
Joel’s recent instalment “Fruity treats, customization, and supersonics: FogBugz 7 is here” has a great sample of strategy document and the reasoning behind it. It seems that FogCreek plans to leave the bawling valley and enter the tornado (according to Geoffrey Moore’s classification) with their FogBugz 7.0 and hence the strategy of removing barriers that prevent people from switching to FogBugz, instead of spending time to introduce some more vertical features.
Other tools that can be used for selecting projects are SWOT analysis, Pareto analysis (i.e. choosing a project to address 20% of causes that are responsible for 80% of problems), PESTLE, Cost-Benefit analysis (similar to ROI, including the critique).
However, it seems that a sane strategy stating what the company plans to do and not do in a finite period of time (often the next year or two; in high-tech markets, conditions are hard to predict beyond that horizon), giving simple guidelines for choosing priorities and a clear direction for joint efforts, is the best starting point.
I also recommend reading a fabulous book “Almost Perfect” by Pete Peterson (former CEO of the maker of WordPerfect) that is available online. The book tells a real-life story of different strategies SSI Inc followed, some planned and stated and some ad hoc, and the way they were used to select what to work on.
ROI is only one measure. There are many other factors:
Risk management - for example, improving the process may not show any direct return on investment, but by adding e.g. unit tests the quality of the software can be improved and risk of a production bug reduced.
Compliance - there may be requirements by industry or government that need to be followed. Directly this may not show a return on investment because they may never be audited, but the downside to being non-compliant is huge (being shut down).
Manageability - providing metrics on bugs, project schedules etc. may not show a direct return on investment but it may allow them to better predict and manage their projects.
Security - this may be considered as a part of risk management, but it is a broad enough area to merit its own category. Making legacy code secure can cost a large amount of money and not show any immediate return, but there are obvious reasons why this is worthwhile.

How to create an accurate hour estimate? [closed]

What are your experiences regarding project planning and creating hour estimates for new projects?
What is the approach you are using, and why has or has it not worked for you?
Are there any best practices to take into account?
Estimation Tasks
The principles that I try to use (I don't always get the opportunity) are:
Step-wise refinement
3 point estimates
Risk analysis
Step-wise refinement
When estimating it's important to estimate at the right granularity and to continually break down and add tasks until you're confident in the estimates. Quite often, estimating highlights a lengthy, critical path task that may need more refinement and risk analysis.
Risk analysis
Trying to work out where risks lie in each task (are there lead times for something? is there a lack of knowledge? could a competitor beat you to it? etc. etc.) helps to determine your confidence in the estimates, which allows you to determine how to treat those estimates. Risk analysis also helps to determine if further step-wise refinement is required.
3-point estimates
Specifying the best, probable, and worst case estimates for each task (including design, development, testing, and bug-fixing) helps with risk analysis and planning. The estimates can be used to calculate the most likely duration to hit particular percentage success of that task. Together with information on other related tasks, and risk analysis, a project manager can factor the risk, and other known elements like system testing into the estimates to get a more reliable estimate.
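One common way to turn 3-point estimates into a single figure is the PERT weighted average, (O + 4M + P) / 6, with (P - O) / 6 as a rough standard deviation; a minimal sketch with invented task estimates follows.

```python
# Hedged sketch: combining best / probable / worst estimates with the PERT formula.
def pert(optimistic: float, most_likely: float, pessimistic: float):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

tasks = {
    "design":  (2, 4, 9),     # best / probable / worst, in days (invented)
    "develop": (5, 8, 16),
    "test":    (3, 5, 12),
}

total_expected = total_variance = 0.0
for name, (o, m, p) in tasks.items():
    e, sd = pert(o, m, p)
    total_expected += e
    total_variance += sd ** 2
    print(f"{name:8s} expected {e:4.1f}d  +/- {sd:.1f}d")

print(f"project: {total_expected:.1f}d +/- {total_variance ** 0.5:.1f}d (1 sigma)")
```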
Of course, the granularity of estimates is also important. There's no point in estimating in hours for most tasks. In software, days is usually best, but sometimes it could be weeks or months (such as if you are out-sourcing blocks of work). Choose a time granularity that makes sense for all the tasks in a project (I usually use days for the requirements capture and functional specification phases, and half-days thereafter as I learn more about the tasks and their sub-tasks).
Conclusion
All three of these items feed into each other, so quite often you have to refine each step a number of times. For example, you might have a stab at the requirements stage, then again during functional specification, and again during design specification.
Estimation is a learned skill; the more you do, the better you get. Risk analysis improves as you learn more about what you don't know, 3-point estimates improve as you learn more about what you do know, and stepwise refinement improves as you go through each step of a design process.
If you have the time, revisit your original estimates after you've completed a task and see how the actual time stacks up against your 3-point estimates and your project plan. If it differs, see where the time was lost or gained and try to learn what you can take from that for future projects.
Estimation should not be a daunting task - I always feel like I know more about my work after estimation than before.
I highly recommend the book "Software Estimation: Demystifying the Black Art" by Steve McConnell. It really covers this question well.
There is some excellent information about this in The Pragmatic Programmer. They advise that you use appropriate units of time: rather than estimating 130 days, estimate 6 months. They also advise concentrating on the tasks that are most crucial, and avoiding making estimates based on sub-estimates.
Personally I find it is useful to break the task down to understandable chunks to properly estimate them. If the task is large there are too many nooks & crannies that can hide unthought-of problems. By concentrating on the details of smaller chunks, you can evaluate the potential problems more successfully.
Your question is an NP-Complete problem:) There are many algorithms used to come up with an estimate but they are always just guesses, are never accurate, and many take a long time to execute. Forget hour estimates, use scrum or some other agile framework. Making estimates for a project in hours at its start is simply lying to people.
Don't make hour-based estimates until right before you build the feature and update those estimates continually as you progress on the feature.
Don't forget to include time for testing in your estimates.
Practice, practice, practice. To be safe, overestimate as you refine your estimating abilities. Of course, if you're a consultant, this can cost you business. If you're afraid of losing business, underestimate, but be aware you'll be making up the extra hours out of your free time/bottom line.
RE:
If you're afraid of losing business, underestimate, but be aware you'll be making up the extra hours out of your free time/bottom line.
You are better off reducing your hourly rate rather than messing with the hours you present to the client. At least this way, you present the appearance of added value to your client.
LM
Log the time spent in your actual projects and that will help you plan for the next one; PSP/TSP offer a way to do it.

How Much Time Should be Allotted for Testing & Bug Fixing [closed]

Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing but not for testing)?
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says 1:1 is a good rule of thumb- however in cases where a specification is bad the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases but more time was spent on bug fixing than actual development due to the terrible specification.
Ensure a good specification/design and your testing/bug fixing time will be reduced because it will be easier for testers to see what and how to test and any clients will have less lee-way to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between devs and tests. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and when your alpha, beta, and ship dates are. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK.) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time for bug fixing, at least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
Only a good amount of accumulated statistics from previous projects can help you to give precise estimates. If you have a well defined set of requirements, you can make a rough calculation of how many use cases you have. As I said you need to have some statistics for your team. You need to know average bugs-per-loc number to estimate total bugs count. If you don't have such numbers for your team, you can use industry average numbers. After you have estimated LOC (number of use cases * NLOC) and average bugs-per-lines, you can give more or less accurate estimation on time required to release project.
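A rough sketch of that calculation; every constant here (NLOC per use case, bugs per KLOC, fix time) is a placeholder you would replace with your own team's statistics:

```python
# Estimating bug-fixing effort from use cases and historical defect density.
use_cases = 40
nloc_per_use_case = 300          # assumed average from past projects
bugs_per_kloc = 15               # assumed team or industry-average figure
hours_per_bug_fix = 3            # assumed team average

estimated_loc = use_cases * nloc_per_use_case
estimated_bugs = estimated_loc / 1000 * bugs_per_kloc
bug_fix_hours = estimated_bugs * hours_per_bug_fix

print(f"~{estimated_loc} LOC, ~{estimated_bugs:.0f} bugs, ~{bug_fix_hours:.0f} hours of bug fixing")
```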
From my practical experience, time spent on bug-fixing is equal to or more (in 99% cases :) ) than time spent on original implementation.
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "Code-contracts" (preconditions, check assertions, post-conditions, class-invariants, etc) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect to spend an inverting ratio over the course of your project--that is--you will write lots of hand-code and contracts in the start (1:1) of your project. As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that your equation has inverted: You're writing less of your core-code, and your focus shifts to your "leaf-code" (last-mile) or specialized (vs generalized and generated) code.
Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!
