I'm trying to create some internal metrics to demonstrate (determine?) how well TDD improves defect rates in code.
Is there a better way than defects/KLOC? What about a language's 'functional density'?
Any comments or suggestions would be helpful.
Thanks - Jonathan
You may also consider mapping defect discovery rate and defect resolution rates... how long does it take to find bugs, and once they're found, how long do they take to fix? To my knowledge, TDD is supposed to improve on fix times because it makes defects known earlier... right?
Any measure is an arbitrary comparison of defects to code size; so long as the comparison is similar, it should work. E.g., defects/kloc in C to defects/kloc in C. If you changed languages, it would affect the metric in any case, since the same program in another language might be less defect-prone.
Measuring defects isn't an easy thing. One would like to account for the complexity of the code, but that is incredibly messy and unpleasant. When measuring code quality I recommend:
Measure the current state (what is your defect rate now)
Make a change (peer reviews, training, code guidelines, etc)
Measure the new defect rate (Have things improved?)
Goto 2
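As a rough illustration of that measure/change/measure loop, here is a minimal Python sketch comparing the defect rate before and after a process change (all figures are invented):

```python
# Hypothetical before/after snapshots of one codebase; the numbers are made up.
before = {"defects": 42, "kloc": 120}   # state measured before the change
after = {"defects": 27, "kloc": 135}    # state measured after peer reviews were introduced

def defect_rate(snapshot):
    """Defects per KLOC for a single measurement."""
    return snapshot["defects"] / snapshot["kloc"]

rate_before = defect_rate(before)
rate_after = defect_rate(after)

print(f"before: {rate_before:.2f} defects/KLOC")
print(f"after:  {rate_after:.2f} defects/KLOC")
print(f"change: {100 * (rate_after - rate_before) / rate_before:+.1f}%")
```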
If you are going to compare coders make sure you compare coders doing similar work in the same language. Don't compare the coder who works in the deep internals of your most complex calculation engine to the coder who writes the code that stores stuff in the database.
I try to make sure that coders know that the process is being measured not the coders. This helps to improve the quality of the metrics.
I suggest using the ratio between:
the time spent fixing bugs
the time spent writing other code
This seems valid across languages...
It also works if you only have a rough estimate for some big code base. You can still compare it to the new code you are writing, to impress your management ;-)
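For what it's worth, a minimal sketch of computing that ratio, assuming your time-tracking data can be reduced to (category, hours) pairs (the entries below are invented):

```python
# Invented time-tracking entries: (category, hours)
entries = [
    ("bugfix", 6.5), ("feature", 14.0), ("bugfix", 3.0),
    ("feature", 22.5), ("refactor", 4.0), ("bugfix", 8.0),
]

bugfix_hours = sum(h for cat, h in entries if cat == "bugfix")
other_hours = sum(h for cat, h in entries if cat != "bugfix")

# Ratio of time spent fixing bugs to time spent writing other code.
print(f"bugfix/other ratio: {bugfix_hours / other_hours:.2f}")
```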
I'm skeptical of all LOC-related measurements, not just because of different relative expressiveness of languages, but because individual programmers will vary enough in the expressiveness of their code as to make this metric "fuzzy" at best.
The things I would measure in the interests of project management are:
Number of open defects on the project. There's no single scalar that can tell you where the project is and how close it is to a releasable state, but this is still a handy number to have on hand and watch over time.
Defect detection rate. This is not the rate of introduction of new defects into the system, but it's probably the closest proxy you'll find.
Defect resolution rate. If this is less than the detection rate, you're falling behind - if it's greater, you're getting ahead.
All of these numbers are more useful if you combine them with severity information. A product with 20 minor bugs may well be closer to release than one with 2 crashing bugs. If you're clearing the minor bugs but not the severe ones, you have to get the developers to refocus their attention.
I would track these numbers per project and per developer. The reason for doing them per project should be clear. The per-developer numbers are certainly not the whole picture of an individual contributor's skill or productivity, but can point you to people who might need training or remediation.
You may also wish to tag all the tickets in your defect tracking system by project module as well (especially for larger projects), so that you can tell when critical modules are in a fragile state.
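For what it's worth, here is a minimal sketch of pulling those three numbers, plus severity, out of a ticket dump (the fields and dates are hypothetical, not any particular tracker's schema):

```python
from datetime import date

# Hypothetical ticket export: opened date, closed date (None if still open), severity.
tickets = [
    {"opened": date(2024, 5, 1), "closed": date(2024, 5, 3), "severity": "minor"},
    {"opened": date(2024, 5, 2), "closed": None,             "severity": "crash"},
    {"opened": date(2024, 5, 6), "closed": date(2024, 5, 7), "severity": "minor"},
    {"opened": date(2024, 5, 8), "closed": None,             "severity": "minor"},
]

window_start, window_end = date(2024, 5, 1), date(2024, 5, 14)
window_weeks = (window_end - window_start).days / 7

open_defects = [t for t in tickets if t["closed"] is None]
detected = [t for t in tickets if window_start <= t["opened"] <= window_end]
resolved = [t for t in tickets if t["closed"] and window_start <= t["closed"] <= window_end]

print(f"open defects: {len(open_defects)} "
      f"({sum(1 for t in open_defects if t['severity'] == 'crash')} crashing)")
print(f"detection rate: {len(detected) / window_weeks:.1f} per week")
print(f"resolution rate: {len(resolved) / window_weeks:.1f} per week")
```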
Why don't you consider defects per use case? Or defects per requirement? We have faced practical issues in arriving at the KLOC.
I'm working in a team of 2 front-end developers on a web-based late-stage startup project.
The site works quite well, but there's a lot of room for improvement code-wise, as the code is quite messy and disorganized.
I would like to clean things up gradually by writing tests and carefully refactoring to avoid breaking anything. (Using principles from the book 'Working Effectively with Legacy Code')
However, the developer I'm working with is being given a lot of high-priority feature work, and I don't want to burden him with maintenance tasks. A lot of the time he has to write messy code simply because of the time-constraints.
As the team grows I'm concerned about how to manage the different concerns.
I'm thinking of dividing the team into 2 groups:
Group 1: Does rapid development on new features, with less care for code quality.
Group 2: Writes unit tests, refactors code, and generally optimizes things.
The result I'm aiming for is to bring as much of the code under test as possible, while still keeping up the pace of new-feature development.
Has this been tried before? Any thoughts?
There is a problem with dividing the team.
The group doing the rapid development of new features will be very popular with the sales force, and the management. They will be perceived as more productive, more solution-oriented and they will be included in more of the important discussions.
The group worrying about the quality will be perceived as costly, negative, unproductive, problem-oriented whiners.
The bean-counter reality of it is that the "easy wins" of the rapid group accumulate hidden costs of poor quality. If this goes on too long, the code will become messier and messier as new rapid code builds on old rapid code, and it will become more and more time-consuming and risky to add features. Those hidden costs become visible when no one can add any new features anymore.
Think two years down the road. Your rapid coders will probably have moved on, cashing in on their many "successes". The sales force will be talking about them, and how easy it was when they were running the shop.
It's like weeds in a garden. It's easy to solve the problem early, much harder when it's all weeds.
The engineering reality of it is that rapid coding shouldn't be messy. You don't save time by doing things messy. If it looks that way, there is either a skill problem or a framework problem.
So if you divide the group, rotate after every iteration!
And how do I know this scenario so well? I came in after the "successful" coders had left...
Hi, I think you cannot divide the team into two groups.
@Guge has covered the financial and political impact on the members of those groups.
And there are more problems:
1) The speed group will not care about the quality of the code (they won't be paid for it!), so they will produce ugly code.
2) The quality group will rewrite some code to make it better. The speed group will then add new functionality to that code, and it will become ugly again.
3) The speed group will keep adding new functionality to code that has not been refactored. They do no refactoring themselves (they are not paid for it, so management may get angry if they do, or if their pace slows down), so they will create ugly^2 code at a lower speed than if they had refactored first.
4) No new functionality means no $$$, which means smaller wages, which means worse developers assigned, which means lower speed and quality.
5) Better devs will find a way to move to the speed group, or a way to change jobs :)
6) If the quality group improves any subsystem, the speed group may refuse to take any patches to existing code, so as not to slow down.
OK, so what can you do to prevent such a situation?
Talk with your management. You need to maintain a balance between new functionality and code quality. Only good code allows you to introduce fast changes or implement new functionality at high speed.
A good argument is that badly written code, or an architecture that has evolved into something ugly, is a debt. If you don't pay it, the monthly interest grows rapidly. You can probably afford not to pay for one month, but not longer.
You can bring your credit card with you to emphasize the point.
And if your team is given too much work to finish by the deadline, it is always worth saying so loudly. But carefully ;)
And all programmers should refactor and write tests (where relevant). No exceptions.
Just refactor a bit today, a bit tomorrow, etc.
The credit card trick is borrowed from Fowler's "Refactoring" <- great book.
PS: there are projects like a "death march", where only pushing devs hard makes economic sense (because the project will only make money if it succeeds by the deadline with limited expenses; the symptoms are constant overtime with fewer devs than required), and my suggestions don't apply to those.
I think that in the short term it's not a problem for you to devote some people to new development and some people to cleaning up the new code. However, keeping this as an ongoing pattern is flawed. Your aim should be to get code delivered on time and to a high standard, and your entire team should be working toward this goal. If you split the team, you have one half writing exciting new features with no concern for code quality, and the other half cleaning up after them. Over time you may find that the cleanup team needs to be significantly bigger than the new-features team.
Your first step (which you seem to have made some progress with) is to decide on what sort of quality your code is and how it is going to be measured (test coverage, cyclomatic complexity etc). You can then apply these metrics to your current code and understand how much of it doesn't come up to standard. This code is the code that has accrued 'Technical Debt' which needs repaying.
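As a toy illustration of applying those metrics to find the code that hasn't come up to standard, here is a small sketch; the file names, thresholds and numbers below are all invented and not tied to any particular tool:

```python
# Hypothetical per-file metrics, e.g. exported from your coverage and complexity tools.
metrics = {
    "cart.js":     {"coverage": 0.85, "max_complexity": 6},
    "checkout.js": {"coverage": 0.40, "max_complexity": 19},
    "utils.js":    {"coverage": 0.10, "max_complexity": 4},
}

# Example standard agreed with the team: at least 70% coverage, complexity under 10.
STANDARD = {"coverage": 0.70, "max_complexity": 10}

# Files that fall below the standard are the ones carrying the technical debt.
debt = [
    name for name, m in metrics.items()
    if m["coverage"] < STANDARD["coverage"] or m["max_complexity"] > STANDARD["max_complexity"]
]
print("files needing debt repayment:", debt)
```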
The next thing to do is to repay this technical debt. This involves bringing this code in line with the standards that you have put in place. This is a tedious task if it has to be done often, however it needs to be done, and to be done by anyone who is free to do it.
Finally make it your main goal to produce code that is in line with your quality measures. If a deadline shows up then you may have to ignore them for a while but once the deadline is met your priority is to get your code back up to the quality you need.
Let's say you have a project that will involve two web applications (that will share DAL/DAO/BO assemblies and some OSS libraries):
a semi complex management application that uses Windows Live ID for authentication and is also capable of communicating with various notifier services (email, sms, twitter etc.), targeted notifiers being about 10% of functionality
a low to semi complex user application with less functionality but more robustness that also uses Windows Live ID for authentication
There are two of us, with medium estimation capabilities, and we wouldn't be able to estimate it in two days even if we wanted to or had to. At the least, it would be a far-off estimate.
Questions
How long would/does it normally take you to make a reliable/valuable estimate?
What would you suggest to speed-up estimation without sacrificing accuracy?
How much slack (in terms of cost/time) would you add depending on how quickly the estimate was made (i.e., when you would say: I could estimate this a bit longer, because I think it's still quite far off)?
Since we use Agile methods (Scrum, specifically) it takes us about an hour longer than it takes the users to prioritize.
More time doesn't lead to more accuracy.
The hard part, then, is getting the users to prioritize. We hear this discussion all the time "if the whole thing isn't completed on time, it's all worthless." "Except for the XYZZY component, which does have some value." That argument can go on for hours until it's resolved that XYZZY should be first.
Generally, we try to create 4-week sprints. The first few are complicated because there's always something new. After the first two (or three) we seem to set a steady pace.
Each use case gets a relatively simple, subjective valuation of the effort it will take to finish it. Anything over one full sprint in duration has to be decomposed. Most times a few use cases are bundled into a single sprint.
There are formal ways of scoring each use case to better handle the cost and schedule issues. We don't use them because the extra effort doesn't help.
After the first two sprints,
There's new and different functionality,
The priorities have all changed,
The details of each use case have been dramatically revised.
What does "accuracy" mean when the thing you're trying to estimate changes at the end of each sprint?
One lesson learned. Parts of my company spend a long time fully defining exactly what will be delivered, and then measuring that they are delivering precisely what they want.
Customers notice this, and one said we "spend a lot of time delivering what the contract says, but it isn't what we needed."
The problem with firm up-front estimates is that they take on a life of their own. The more you "invest" in the estimating, the more the estimates seem to be a useful deliverable. They aren't useful because they're generally totally wrong. They're based on up-front assumptions that are totally wrong.
It's a bad policy to invest more time in estimating. The "accurate" answers aren't more accurate, but they are more treasured by every layer of management. As you and the customer learn, you invalidate numerous assumptions and you absolutely must re-estimate constantly.
Don't do it up front. If your contract requires you to do it up front, then make sure you have a change control provision and tell the customer that you absolutely will make changes as you go forward. As both you and the customer learn, you both must make changes.
Interesting question. I'm afraid the answer is "it really depends!" I know that's not terribly useful (although it IS true) so here are some factors:
1) Quality and completeness of requirements and their specification. This is, to me, most often the project-estimate killer. If you don't have quality requirements, you have no reasonable basis for an estimate. We use a "RUP-lite" style of product development here, so as the engineering manager I won't give anything but the coarsest-grain estimate until we've completed our "elaboration" phase and gotten sign-off from product management that the core 80% of the product features are in fact accurately covered.
2) The scope & nature of the product. Bigger/more expensive/more complicated = substantially longer to estimate. I've spent years working in telco-carrier land delivering solutions which have the normal robust carrier requirements ("5 9's" of uptime required by SLA mean you must really do a good job of solution design and failure recovery!). In that sort of environment with all the moving parts across functional areas of the business, the estimation of work is going to hinge on getting the whole picture...specifically, cross-functional dependencies and external dependencies can be a REAL killer here. That said, I've also built lots of shrink-wrapped and enterprise software, too. In those environments it's much easier as the scope is typically substantially smaller, so thus easier to estimate.
3) How "new" is this project? How "new" is the team to this product or technology set? The newer the product or team, the longer and more buffer you should allocate.
4) How specific do we need to be? If this is a "rough guess" then I'll lean on my engineering leads to provide a conservative estimate, then I'll pad that. If we need a "real" estimate (e.g., one which is used by my boss and which I'll be responsible for hitting), I'd need the input from a number of different managers and team members, who will need time to analyze the requirements and confer amongst themselves.
That can take as little as a couple of days, or weeks... it all depends on the size. "Two or three days" is, frankly, not long enough for sizing anything but the most trivial of projects.
The best thing you can do to improve the quality of your estimates is to improve the quality of your requirements, and be ruthless in identifying hidden dependencies.
One final thing: FWIW, I've been doing this since '81 and I regard accurately estimating a project's duration/cost as the single most difficult/fraught with peril part of engineering management.
To make a reliable estimate you really need to create a list of tasks to be done. Break it down into stories/tasks (even if you don't use agile) and evaluate them. This can already take a lot of time - especially the research (e.g., looking for libraries to handle the notifier stuff to reduce the cost). I would take at least 3 days - however, 1-2 weeks sounds more reasonable to me. This doesn't seem to be a small project.
I wouldn't dare speed up the estimation process if you want somewhat reasonable results. Estimations are never accurate, and you will just make things worse.
An option would be to make a totally rough guess within a few hours and then start working on it already (for a week or so). At the end of the week you might be able to give a better estimation based on your current progress. However, it's important to create a good prototype (which looks like crap but has a little bit of code in all of the regions).
Well, a common quote is "The price will be between 50% and 400% of our initial estimates". The reason this quote has become widespread is that it's true. Estimation accuracy largely depends on your domain knowledge. If this is the 100th time you've sold a given type of blog engine, then you're pretty sure about the estimates. However, more often than not, you don't have that in-depth knowledge of the domain (if the app already exists, why create a new one?).
Agile development has become popular because people largely recognize that the traditional "waterfall" type of thinking just doesn't work for most real-world projects. You should apply the same way of thinking to your estimates. Obviously, you need some kind of selling point, but be sure to tell your customers that this information is very vague (and that no matter what any competitor will tell them, their estimates are vague as well).
You need to sell iterations, and therefore also estimate iterations. I'm sure some marketing-guy in any company will know how to handle the paperwork and the legal stuff for this, so it shouldn't be that big a deal.
Consider a 5 iterations project:
Start out by putting up milestones. These are subject to change, but will provide your vague first estimates for the final product.
Plan iteration #1.
Estimate iteration #1, adjust total estimates accordingly.
Do iteration #1
Rinse / repeat till iteration #5
You're done :)
Reflect over your project. How did your estimates evolve? Reasons? Learn by doing :)
Most customers would rather have partial estimates and partial deliveries than some unrealistic goal that some suit sold them :)
Straying slightly from your original question but it's also important to learn from the estimates that you have produced in the past by keeping information about how long it actually took to do the things that you estimated.
We estimated 5 days to produce these pages; it actually took 10 days; and so on.
In the long term information like this will (hopefully!) enable you to produce more accurate estimates.
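A tiny sketch of what using that history might look like (all numbers invented):

```python
# Invented history of (estimated days, actual days) for past tasks.
history = [(5, 10), (3, 4), (8, 9), (2, 5)]

# Average factor by which actuals exceeded estimates.
overrun = sum(actual / estimated for estimated, actual in history) / len(history)

new_estimate_days = 6
print(f"historical overrun factor: {overrun:.2f}")
print(f"adjusted estimate: {new_estimate_days * overrun:.1f} days")
```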
Whilst answering “Dealing with awful estimates” posted by Ash I shared a few tips that I learned and personally use to spot weak estimates. But I am certain there must be many more!
What heuristics to use in the scenario when one needs to make a quick evaluation of software project estimate that has been compiled by a third-party (a colleague, a business partner or an external company)?
What are the obvious and not so obvious signs of weak software estimates that can be spotted without much detailed knowledge of task at hand?
A single person having done the estimates, rather than having used consensus based estimation (to fully understand the implied scope of requirements) such as Wideband Delphi.
Especially true if the person doing the estimation is not the person doing the implementation!! - I once worked on a project estimated by someone else as 60 days before any requirements had even been given. Let's just say I was not a happy bunny.
No time for documentation.
No time for ramp-up (in terms of learning, and team size).
No list of risks, and their impact to the timescale.
No buffer for the unexpected - in terms of late breaking requirements, and risks.
No one has said it, so I will. The obvious answer is that if you have software schedule estimates then that is a sure sign of unrealistic figures. Yes, there are many methods for estimating software but none of them are accurate in any way, shape or form. What usually happens is that deadlines are set. If the task is over-estimated then extra time is spent making the result better. If the task is under-estimated then something is sacrificed to meet the delivery (like testing and features).
I know this answer isn’t what people want to believe, but estimating is always a guess. More often than not, a developer can’t even predict how much they will accomplish by the end of the day. You are expecting them to guess things months/years down the road on something that they aren’t even sure what is really involved yet.
The only practical answer to your question that isn’t prone to giving unrealistic results would be using a worksheet that comes up with guesses based on previous history at your company. Unfortunately, that will not account for tasks the estimator missed. At least this may give ballpark numbers.
Unless you develop knock offs of the same exact system over and over again, then anyone who thinks they have figured this out is fooling themselves. There are way too many variables involved.
There are two types of estimates: task estimates and project estimates. You can view these as the big and small pictures.
Project estimates are necessarily high level (granularity no smaller than days typically) and must include things like:
High level architecture;
Time for testing;
Ramp up times;
Defect processes;
Time for documentation;
Relevant training;
Assumptions;
Dependencies (eg team A can't do what they need to until team B delivers phase 1);
Critical path (which pieces are likely to determine if the project slips and by how much); and
Risks.
The more of those things that are missing, the more unrealistic (or risky) the estimates.
The second kind is the task estimate, which is typically much lower level. For this kind of estimate there should simply be a task breakdown (with no task being larger than, say, 5 days).
These don't tend to address the above items but some of them might be relevant, such as assumptions regarding decisions not made yet (eg production hardware). It may also be worth identifying who can and can't do the tasks due to relevant experience, background knowledge or skills (as that person or those persons may end up overcommitted).
Other posts have mentioned that testing time should equal or exceed dev time. I strongly disagree with this. I've seen 8-hour dev tasks result in 100+ hours of test time, and 80-hour dev tasks result in less than 2 hours of testing. In both cases the testing time was entirely reasonable. There is no absolute correlation between the two. At best, there is a loose connection.
Is the estimate what the management wanted to be told?
Does the estimate nicely fit in with the planned shipment date for the next release?
Does the management reward people that give good news more than people that give bad news?
Was the estimate prepared before knowing who would be working on the project?
Did someone that wanted that bit of functionality implemented prepare the estimate?
Is there a history of software being late?
Is it normal for developers to be moved onto other tasks partway through a project?
Have some or all developers given up on commenting on bad estimates as a waste of time?
Count up the number of questions to which you get "yes" or "maybe" answers...
If you get mostly "no" answers to the above questions, then it may be worth looking at the estimate in detail to see if it includes the tasks that other people have listed in this thread.
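If it helps, the tally itself is trivial; a sketch with made-up answers:

```python
# Hypothetical answers to the checklist above.
answers = {
    "estimate is what management wanted to hear": "yes",
    "fits the planned shipment date": "maybe",
    "prepared before the team was known": "no",
    "history of software being late": "yes",
    "developers gave up commenting on bad estimates": "no",
}

warning_signs = sum(1 for a in answers.values() if a in ("yes", "maybe"))
print(f"{warning_signs} of {len(answers)} questions raise a warning")
```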
Wow... I really like toolkit's answer.
And I agree that any estimate at all is flawed, because it assumes that the estimator has way more of a clue for how to solve the problem than any estimator actually does when a project gets estimated. However, I think you still need to at least estimate the size of the mountain before you start. Some thought as to whether it's worth trying to do it should precede any endeavor and that's what the essence of an estimate should be.
I did come up with a few more indicators of a dangerous estimate:
No cross-reference - If the estimate can't be validated at least two different ways, it's likely to be unreliable. For example, the last estimates I've done I've been able to break down the work into small feature chunks, and consider how long it's taken our team to do similarly scoped features. Then I was able to look at the sum of these costs and see how big the scope was relative to other projects I've worked on. That's two ways to validate.
The background of the estimator - if this is a software estimate done by a hardware guy who's never written code - be very afraid. More subtle - the closer the estimator's background is to the technology and problem domain of the project, the better.
Detail - as said a few different ways in a few different posts - I like to see detail for both individual features, as well as the tasks needed to complete each feature. Most estimates I see don't show the detail in the formal presentation, but if you ask the person who did the estimate, they should have a file somewhere. Hopefully it's not the back of a paper napkin stained with beer and ketchup. :)
Documented Assumptions - any estimator will have had to make some set of assumptions about the task. These should be documented somewhere, preferably in the formal paperwork. I always get a little worried when I see a short proposal without many documented assumptions. Either they were thought through and not communicated to the customer, or they were not thought through. I'm not honestly sure which one is worse. It goes without saying that the assumptions should also be reasonable.
Balanced Lifecycle - However the task is broken down, what's the ratio of design, code and test? How about documentation, integration with external systems and post release support? How about those extra things that are so vital (system admins, CM gurus, management effort)?
Slack - I'm sure the corporate daemons of cheapness will come and flay me, but a schedule and a cost should have some slack. If the estimate looks ambitious and aggressive to an experienced eye, it is likely to be too low. Estimates are almost never too high.
One good heuristic is to see if the test time is roughly the same as the development time. That's a good sign for the estimate.
If they can't give you a breakdown of the estimate then that's a bad thing. Usually a sign of lots of little things that may have been forgotten. They don't need to provide the complete original breakdown, just a breakdown like:
requirements
development
testing
packaging and deployment
etc.
They should be using a standard template to calculate their estimate. They don't need a number in every column, but they do need the template to list all possible tasks. That way the template can be used to jog people's minds when working on the estimate.
If the estimate is overly precise, e.g. 0.25 hour increments, then that, for me, is a bad smell.
Are there things missing, like requirements capture, testing, deployment, and handover to any Ops group? If any of those are missing, that's the sort of thing that will come back and bite you.
Edit: One other thing to watch for is the old "perpetually 90% complete" tasks. You get progress update after progress update listing a task as "90% complete". That's not good!
HTH
cheers
Is the compiler of the estimate available and willing to discuss it with other senior project members? If not, that is often a concern.
Was the estimate sent to the customer before the experience and skills of the development staff were known? Two-point estimates may help, but only to some extent.
Before even getting a chance to look at the estimate, you are told that you are committed to delivering all of the functionality described by a specific date.
(Thanks for responding to my question, by the way.)
If you see one or more of these, you may have a bad estimate:
Single point estimates: an estimate should be associated with a range of possible dates or a confidence value
Insufficient granularity of tasks: a large task bucket usually indicates that the functionality is not well understood (which is especially a problem since poorly understood problems are usually under-estimated)
No expression of assumptions and/or risks
Inadequate effort allocated for commonly skipped or underestimated items (e.g. build scripts, documentation, deployment, etc.)
I agree with sateesh, I really like Software Estimation: Demystifying the Black Art by Steve McConnell. He has several checklists which are useful when reviewing and/or preparing estimates.
I totally agree with Dunk, the first sign of bad estimates is the mere presence of a large detailed upfront schedule. Estimates are exactly that, an approximation, otherwise we would call them exactimates. So they should never be used alone in the management of a project.
The most important point to consider is not the accuracy of estimates but the consistency. If a third party were doing estimates for you, then ask to see a history of their successes or failures, speak with their past clients and determine their reliability.
That all being said, from an Agile standpoint, some of the ways we attempt to gain more consistent estimates during a project are;
Use a relative sizing standard (S,M,L,XL) rather than time based (ideal days).
focus on complexity not time
Always use group estimates not single person estimates
Gather estimates frequently throughout the project, generally at the start of each sprint
use feedback from previous sprints in determination of story complexity
track velocity to give meaning to the relative sizing
frequent and early story retrospection to examine/understand any thrashing
If you are dealing with companies that use these estimation methods then, chances are you are going to receive consistent and therefore better results.
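As a small sketch of how relative sizing plus tracked velocity turns into a forecast (the point scale, backlog and sprint history below are all invented):

```python
# Map relative sizes to points; the scale itself is arbitrary, only consistency matters.
SIZE_POINTS = {"S": 1, "M": 2, "L": 3, "XL": 5}

# Invented backlog of remaining stories, sized by the whole group.
backlog = ["M", "S", "XL", "L", "M", "S", "L"]

# Points actually completed in previous sprints (the velocity history).
completed_per_sprint = [8, 6, 7]

remaining_points = sum(SIZE_POINTS[s] for s in backlog)
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

print(f"remaining: {remaining_points} points, velocity: {velocity:.1f} points/sprint")
print(f"forecast: about {remaining_points / velocity:.1f} sprints left")
```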
Estimates of the form 3, 6, or 12 months (basically any round numbers) reek of guessing. Usually when you guess, you pick some round number bigger than you think it will take; quarters, half a year, and so on are the usual suspects. I much prefer estimates in terms of actual development iterations (whatever their size).
What are the obvious and not so obvious signs of weak software estimates that can be spotted without much detailed knowledge of task at hand?
Estimates which are given without much detailed knowledge of the task at hand are generally not good.
Perhaps a general approach you could take is to check that items in the requirements correspond to those in the estimate. If you want to be very quick, check the relative sizes: if a 100-word estimate is given for a 100,000-word brief, it stands no chance of being right.
Also (as others have said) check that analysis, coding, debugging, testing, integration, contingency etc are mentioned. It shows some thought has gone into it.
Having success and sign off criteria at various stages is a great sign. If they have a defined point which is 10% done at least if the estimate is wrong you know early and have a chance to adapt. If there are no checkpoints until “finish” you may not know that you are behind until that date is hit.
How familiar is the person giving the estimate with the people doing the work?
I have often seen estimates that assume a generic person doing the work, even though the team is made up of individuals with very different backgrounds. Most likely the tasks and the specialties don't line up perfectly, and you get a C++ server-side programmer who ends up doing either your GUI or your database... Sometimes the manager of the team doesn't really appreciate the team members' strengths, so if he has been asked to come up with the estimate on his own because his team is busy on the previous project, you will find that the work in question is really only suitable for part of the team (not motivating, lack of skills, etc.).
One other helpful way to evaluate estimates is to compare them with the actual effort that was spent on previous projects of a similar kind. The best data for the comparison would be the effort data of previous projects that the organization has done. If there is no organizational historical data, you can try to benchmark the estimate against industry-wide benchmarks.
I would also say that if the estimate is presented as a single absolute number (say, 180 days), then it is not a good sign. A single absolute number indicates that the estimate claims the task will be finished with 100% probability on the given date. An estimate presented as a range (say, 130 to 180 days) indicates the range in which the task could be completed.
Much of what I have written above I attribute to the book:
Software Estimation: Demystifying the Black Art
by Steve McConnell
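On the single-number vs. range point above, one common way to express an estimate as a range is a three-point (PERT-style) calculation; here is a minimal sketch in which the task names and day values are pure placeholders:

```python
# Invented (optimistic, most likely, pessimistic) estimates in days per task.
tasks = {
    "authentication": (3, 5, 10),
    "notifications":  (4, 8, 20),
    "reporting":      (2, 3, 6),
}

def pert(optimistic, likely, pessimistic):
    """Three-point expected value and rough standard deviation for one task."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    sigma = (pessimistic - optimistic) / 6
    return expected, sigma

total_expected = sum(pert(*t)[0] for t in tasks.values())
# Variances add (assuming independent tasks); take the square root at the end.
total_sigma = sum(pert(*t)[1] ** 2 for t in tasks.values()) ** 0.5

print(f"expected: {total_expected:.0f} days, "
      f"range: {total_expected - 2 * total_sigma:.0f} "
      f"to {total_expected + 2 * total_sigma:.0f} days")
```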
I check the estimates against the manpower. Although not a very accurate heuristic, it's clear that if some massive piece of work has just one or two devs assigned to it, the task was not estimated correctly.
A good estimate will have a good breakdown, involving all phases of the project.
It will almost certainly not finish at a convenient date for the business.
It will include risks of various sorts.
It will be presented in terms of confidence intervals, either implicitly (10-12 months) or by using large units (about four quarters).
It will be made by somebody with responsibility for getting the project done, preferably more than one such person.
If there are delays at the start, there will be delays at the end (expressed as 10-12 months from start, or about 1Q2010 if we start now, not something like January 2010 when the project hasn't started yet).
Assumptions and dependencies will be clearly and prominently listed.
Edit: Part of this depends on the stage the project is in. An early but precise estimate is a warning sign, particularly if there is no confidence interval assigned. That reeks of a Procrustean estimate.
Also, watch for other development methodologies. A timeboxed project can have a rigid and arbitrary schedule, but the feature set will be flexible.
Any of the following:
It is one big project and there isn't a short high level strategy described
There isn't a clear, short and concise vision of what is to be achieved with the project
The project isn't structured around business value being delivered gradually
The team is trying to give "accurate" estimates for a big project, going into (or having just finished) a long analysis phase (changes will come, and will usually affect those estimates in ways that can't be easily quantified without yet more big efforts)
There are "detailed" estimates for the whole project (related to previous)
There aren't detailed estimates for the first phase, or there is something wrong with those.
"Four to six weeks", when not accompanied with a breakdown into shorter tasks...
What techniques do people recommend to track the quality level of a new program? Are there ways to take a poorly defined term like "quality level", quantify it, and then make predictions? Currently I use bug rates and S-curves, but I am looking for other ways to evaluate, estimate and predict quality levels.
Are you looking for this measure?
More seriously, for me code quality is about maintainability:
how easy it is to fix a bug, to add/remove/modify a feature,
how easy it is to refactor: is a regression test suite available?
how much technical debt?
Remember that you write code once, but you read it several times.
I am not sure what counting your bugs is going to do without something to compare it with. What if the software you made was very hard and had lots of edge cases? You need some kind of comparison...
Even though one of the other answers was clearly a joke, code reviews are also probably a good idea. If you have too many bugs, hire better engineers or have them write less code.
Edit: Added after considering comments...
Every bug is like a unique sucky snowflake (a suckflake?). They have different levels of impact on your customers and developers. I would at least take this into account. Maybe adding severity (a measure of customer push for a fix) and engineering hours spent fixing it might help improve accuracy. My concern, I guess, is that this is still an oversimplification of "quality" when it comes to developing software.
Sadly software quality != product quality. A recently released game called Fallout 3 won tons of awards and made lots of money (at least I assume) but was also a buggy hunk of junk on PC at least.
Just make sure you are tracking and optimizing the correct thing. Tracking # bugs vs time is just tracking # of bugs vs time. Reading any more into it requires some level of assumption however correct or incorrect.
What are your goals? Bugs are but one part of software quality. If you want to continue you to support your software maintainability is key. A lot of times bug fixes can make things less maintainable if your coder did things in a rush. This in turn makes future fixes and features harder and fixes can add new bugs.
This depends a lot on what kind of comparisons you're trying to make. If you're looking at a single project over time and the team doesn't change, then bug rates might be meaningful. However, if you're comparing different projects, with different teams, there's really no way to compare things like bug rates, because you are really comparing the rate of known bugs. One team may be much better at identifying bugs than the other, making their bug rate look higher, but they're really the ones with better software quality.
Do you do unit testing?
Code coverage on unit tests can be a decent measure of quality.
Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate.
So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else?
Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing but not for testing)
It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end).
As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad, the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases, but more time was spent on bug fixing than actual development due to the terrible specification.
Ensure a good specification/design and your testing/bug fixing time will be reduced because it will be easier for testers to see what and how to test and any clients will have less lee-way to push for extra features.
Maybe I just write buggy code, but I like having a 1:1 ratio between devs and tests. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and when your alpha, beta, and ship dates are. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix.
A good tester, who find bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time to bug fixing. At least when I do estimates. So in a 16 week run I'd leave around 2-3 weeks.
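For what it's worth, that kind of flat-percentage allocation is a one-line calculation; a sketch using the 15% figure above and a made-up dev estimate:

```python
# Invented development estimate and the bug-fixing percentage described above.
dev_weeks = 16
bugfix_fraction = 0.15  # roughly 15% of dev time reserved for bug fixing

bugfix_weeks = dev_weeks * bugfix_fraction
print(f"reserve about {bugfix_weeks:.1f} weeks of a {dev_weeks}-week run for bug fixing")
```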
Only a good amount of accumulated statistics from previous projects can help you give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need to have some statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry-average numbers. After you have estimated LOC (number of use cases * NLOC) and average bugs-per-LOC, you can give a more or less accurate estimate of the time required to release the project.
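A minimal sketch of that calculation; every number below is a placeholder, to be replaced by your team's or industry figures:

```python
# Placeholder figures; use your team's historical data where available.
use_cases = 40
avg_nloc_per_use_case = 300        # estimated new lines of code per use case
bugs_per_kloc = 15                 # purely illustrative industry-style figure
hours_per_bug_fix = 2.5

estimated_loc = use_cases * avg_nloc_per_use_case
estimated_bugs = estimated_loc / 1000 * bugs_per_kloc
fix_hours = estimated_bugs * hours_per_bug_fix

print(f"~{estimated_loc} LOC, ~{estimated_bugs:.0f} bugs, "
      f"~{fix_hours:.0f} hours of bug fixing")
```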
From my practical experience, time spent on bug-fixing is equal to or greater than (in 99% of cases :) ) the time spent on the original implementation.
From the testing Bible:
Testing Computer Software
p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
Use a language with Design-by-Contract or "code contracts" (preconditions, check assertions, postconditions, class invariants, etc.) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts.
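This answer has Eiffel in mind; as a rough, language-agnostic illustration of the same idea, with plain Python assertions standing in for real contract support and a TDD-style test on top, it might look like this:

```python
def withdraw(balance, amount):
    """Withdraw `amount` from `balance`, guarded by contract-style assertions."""
    # Preconditions: what the caller must guarantee.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"

    new_balance = balance - amount

    # Postcondition: what this routine guarantees to its callers.
    assert 0 <= new_balance < balance, "postcondition: balance reduced, never negative"
    return new_balance

# A TDD-style test exercising both the behaviour and the contract.
def test_withdraw():
    assert withdraw(100, 30) == 70
    try:
        withdraw(100, 200)          # violates the overdraw precondition
    except AssertionError:
        pass                        # contract correctly rejected the call
    else:
        raise AssertionError("expected the precondition to fail")

test_withdraw()
print("contract tests passed")
```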
Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know.
You can expect the ratio to invert over the course of your project -- that is, you will write lots of hand-written code and contracts at the start (1:1). As you see patterns, teach a code generator YOU WRITE to generate the code for you, and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that the equation has inverted: you're writing less of your core code, and your focus shifts to your "leaf-code" (last-mile) or specialized (vs. generalized and generated) code.
Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster!
The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!