Calculate code metrics [closed]

Are there any tools available that will calculate code metrics (for example number of code lines, cyclomatic complexity, coupling, cohesion) for your project and over time produce a graph showing the trends?

On my latest project I used SourceMonitor. It's a nice free tool for code metrics analysis.
Here is an excerpt from SourceMonitor's official site:
Collects metrics in a fast, single pass through source files.
Measures metrics for source code written in C++, C, C#, VB.NET, Java, Delphi, Visual Basic (VB6) or HTML.
Includes method and function level metrics for C++, C, C#, VB.NET, Java, and Delphi.
Saves metrics in checkpoints for comparison during software development projects.
Displays and prints metrics in tables and charts.
Operates within a standard Windows GUI or inside your scripts using XML command files.
Exports metrics to XML or CSV (comma-separated-value) files for further processing with other tools.
For .NET, besides NDepend (which is simply the best tool), I can recommend vil.
The following tools can perform trend analysis:
CAST
Klocwork Insight

Sonar is definitely a tool you must consider, especially for Java projects. It also handles PHP, C/C++, Flex and Cobol code.
Here is a screenshot that shows some metrics on a project:
(Screenshot of Sonar project metrics: http://sonar.codehaus.org/wp-content/uploads/2009/05/squid-metrics.png)
Note that you can try the tool by using their demo site at http://nemo.sonarsource.org

NDepend for .NET

I was also looking for a code metrics tool/plugin for my IDE, but as far as I know there are none (for Eclipse, that is) that also show a graph of complexity over a specified time period.
However, I did find the Eclipse Metrics plugin; it can handle:
McCabe's Cyclomatic Complexity
Efferent Couplings
Lack of Cohesion in Methods
Lines Of Code in Method
Number Of Fields
Number Of Levels
Number Of Locals In Scope
Number Of Parameters
Number Of Statements
Weighted Methods Per Class
And while using it, I didn't actually miss the graphing option you are seeking.
I think that if you don't find any plugin/tool that can handle graphing over time, you should look at the tool that suits you best and offers all the information you need, even if that information is only for the current build of your project.
As a side note, the Eclipse Metrics plugin allows you to export the data to an external file (the link goes to an example). So if you use a source control tool (and you should!), you can export the data for a specific build and store the file along with the source code; that way you still have a (basic) way to go back in time and check the differences.
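To make that side note concrete, here is a minimal sketch in Python of recording one metrics snapshot per build so you can graph the history later. The metric names and values are hypothetical; in practice they would come from whatever file the plugin exports.

```python
# A minimal sketch of the "store metrics per build" idea.
# Metric names below are made up; plug in whatever your tool exports.
import csv
import datetime
from pathlib import Path

HISTORY = Path("metrics_history.csv")

def record_snapshot(commit_id, metrics):
    """Append one row of metrics for the given commit/build."""
    new_file = not HISTORY.exists()
    with HISTORY.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "commit"] + sorted(metrics))
        writer.writerow([datetime.date.today().isoformat(), commit_id]
                        + [metrics[k] for k in sorted(metrics)])

# Example usage: values would come from the plugin's exported file.
record_snapshot("abc123", {"loc": 15400, "avg_cyclomatic_complexity": 3.7})
```

Once the CSV has a few rows, producing the trend graph is a spreadsheet or plotting one-liner.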

Keep in mind: what you measure is what you get. LOC says nothing about productivity or efficiency.
Rate a programmer by lines of code and you will get... lines of code.
The same argument goes for other metrics.
On the other hand, http://www.crap4j.org/ is a very conservative and useful metric: it sets complexity in relation to coverage.
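If I recall the crap4j definition correctly, the score combines a method's cyclomatic complexity with its test coverage roughly like this (a small illustrative sketch, not the project's own code):

```python
def crap_score(complexity, coverage_percent):
    """CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

    complexity: cyclomatic complexity of the method
    coverage_percent: automated test coverage of the method, 0..100
    """
    uncovered = 1.0 - coverage_percent / 100.0
    return complexity ** 2 * uncovered ** 3 + complexity

# A fully covered method stays cheap regardless of complexity...
print(crap_score(10, 100))  # 10.0
# ...while an untested complex method explodes.
print(crap_score(10, 0))    # 110.0
```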

NDepend: I am using it and it's the best for this purpose.
Check this:
http://www.codeproject.com/KB/dotnet/NDepend.aspx

Concerning the tool NDepend, it comes with 82 different code metrics, from Number of Lines of Code, to Method Rank (popularity), Cyclomatic Complexity, Lack of Cohesion of Methods, Percentage Coverage (extracted from NCover or VSTS), Depth of Inheritance...
With its rule system, NDepend can also find issues and estimate technical debt, which is an interesting code metric (the amount of dev effort needed to fix problems vs. the amount of dev time wasted per year if the problems are left unfixed).
All these metrics are detailed here.

If you're in the .NET space, Developer Express' CodeRush provides LOC, Cyclomatic Complexity and the (rather excellent, IMHO) Maintenance Complexity analysis of code in real-time.
(Sorry about the Maintenance Complexity link; it's going to Google's cache. The original seems to be offline ATM).

Atlassian FishEye is another excellent tool for the job. It integrates with your source control system (currently supports CVS, SVN and Perforce), and analyzes all your files that way. The analysis is rather basic though, and the product itself is commercial (but very reasonably priced, IMO).
You can also get an add-on for it called Crucible that facilitates peer code reviews.

For Visual Studio .NET (at least C# and VB.NET) I find the free StudioTools to be extremely useful for metrics. It also adds a number of features found in commercial tools such as ReSharper.

Code Analyzer is a simple tool which generates this kind of metrics.

For Python, pylint can provide some code quality metrics.
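If you just want a number you can track over time, a small sketch like this pulls out pylint's overall rating (assuming pylint is on your PATH; the exact wording of the summary line may vary between versions, so adjust the regex if needed):

```python
# Run pylint on a module and extract the overall "rated at X/10" score.
import re
import subprocess

def pylint_score(path):
    result = subprocess.run(["pylint", path], capture_output=True, text=True)
    match = re.search(r"rated at ([-\d.]+)/10", result.stdout)
    return float(match.group(1)) if match else None

print(pylint_score("mymodule.py"))  # e.g. 7.5
```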

There's also a code metrics plugin for Reflector, in case you are using .NET.

I would recommend the Code Metrics Viewer extension for Visual Studio.
It makes it very easy to analyze the whole solution at once, and to compare results to see if you've made progress ;-)
Read more about the features here.

On the PHP front, I believe phpUnderControl, for example, includes metrics through PHPUnit (if I am not mistaken).
Keep in mind that metrics are often flawed. For example, a coder who's working on trivial problems will produce more code and therefore look better on your graphs than a coder who's cracking the complex issues.

If you're after some trend analysis, does it really mean anything to measure beyond SLOC?
Even if you are just doing a grep for trailing semi-colons and counting the number of lines returned, what you are after is consistency in the SLOC measurement technique. That way today's measurement can be compared with last month's measurement in a meaningful way.
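That consistency rule is easy to script. A toy version in Python of the "count trailing semicolons" idea; the absolute number matters far less than applying the same rule every time you measure:

```python
# Count "statement lines" (lines ending in ';') across the files given
# on the command line, e.g.:  python sloc.py src/*.c
import sys

def sloc(path):
    with open(path, encoding="utf-8", errors="ignore") as f:
        return sum(1 for line in f if line.rstrip().endswith(";"))

total = sum(sloc(p) for p in sys.argv[1:])
print(f"{total} statement lines across {len(sys.argv) - 1} files")
```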
I can't really see what a trend of McCabe Cyclomatic Complexity would give. I think CC should be used more as a snapshot of quality, to provide feedback to the developers.
Edit: Ooh. Just thought of a couple of other measurements that might be useful: comments as a percentage of SLOC, and test coverage. Neither of which you want to let slip. Coming back to retrofit either of these is never as good as doing them "in the heat of the moment!"
HTH.
cheers,
Rob

Scitools' Understand does have the capability to generate a lot of code metrics for you. I don't have a lot of experience with the code metrics features, but the static analysis features in general were nice and the price was very reasonable. The support was excellent.

Project Code Meter gives a differential development history report (in Excel format) which shows your coding progress metrics in SLOC, time and productivity percentage (its time estimation is based on cyclomatic complexity and other metrics). Then in Excel you can easily produce the graph you want.
see this article which describes it step by step:
http://www.projectcodemeter.com/cost_estimation/help/FN_monsizing.htm

For Java you can try our tool, QualityGate, which computes more than 60 source code metrics, tracks all changes through time, and also provides an overall rating for the maintainability of the source code.

Related

Choosing a strategy for BI module

The company I work for produces a content management system (CMS) with various add-ons for publishing, e-commerce, online printing, etc. We are now in the process of adding a "reporting module" and I need to investigate which strategy should be followed. The "reporting module" is otherwise known as Business Intelligence, or BI.
The module is supposed to be able to track item downloads, executed searches and produce various reports out of it. Actually, it is not that important what kind of data is being churned as in the long term we might want to be able to push whatever we think is needed and get a report out of it.
Roughly speaking, we have two options.
Option 1 is to write a solution based on Apache Solr (specifically, using https://issues.apache.org/jira/browse/SOLR-236). Pros of this approach:
free / open source / good quality
we use Solr/Lucene elsewhere so we know the domain quite well
total flexibility over what is being indexed as we could take incoming data (in XML format), push it through XSLT and feed it to Solr
total flexibility of how to show search results. Similar to step above, we could have custom XSLT search template and show results back in any format we think is necessary
our frontend developers are proficient in XSLT so fitting this mechanism for a different customer should be relatively easy
Solr offers realtime / full text / faceted search which are absolutely necessary for us. A quick prototype (based on Solr, 1M records) was able to deliver search results in 55ms. Our estimated maximum is about 1bn rows (which isn't a lot for a typical BI app), and if worst comes to worst, we can always look at SolrCloud, etc.
there are companies doing very similar things using Solr (Honeycomb Lexicon, for example)
Cons of this approach:
SOLR-236 might or might not be stable; moreover, it's not yet clear when/if it will be included as part of an official release
there would possibly be some stuff we'd have to write to get some BI-specific features working. This sounds a bit like reinventing the wheel
the biggest problem is that we don't know what we might need in the future (such as integration with some piece of BI software, export to Excel, etc.)
Option 2 is to do an integration with some free or commercial piece of BI software. So far I have looked at Wabit and will have a look at QlikView, possibly others. Pros of this approach:
no need to reinvent the wheel, software is (hopefully) tried and tested
would save us time we could spend solving problems we specialize in
Cons:
as we are a Java shop and our solution is cross-platform, we'd have to eliminate a lot of options which are in the market
I am not sure how flexible BI software can be. It would take time to go through some BI offerings to see if they can do flexible indexing, real time / full text search, fully customizable results, etc.
I was told that open source BI offers are not mature enough whereas commercial BIs (SAP, others) cost fortunes, their licenses start from tens of thousands of pounds/dollars. While I am not against commercial choice per se, it will add up to the overall price which can easily become just too big
not sure how well BI is made to work with schema-less data
I am definitely not the best candidate to find the most appropriate integration option on the market (mainly because of my lack of knowledge in the BI area); however, a decision needs to be made fast.
Has anybody been in a similar situation and could advise on which route to take, or even better - advise on possible pros/cons of the option #2? The biggest problem here is that I don't know what I don't know ;)
I have spent some time playing with both QlikView and Wabit, and, have to say, I am quite disappointed.
I had an expectation that the whole BI industry actually has some science under it but from what I found this is just a mere buzzword. This MSDN article was actually an eye opener. The whole business of BI consists of taking data from well-normalized schemas (they call it OLTP), putting it into less-normalized schemas (OLAP, snowflake- or star-type) and creating indices for every aspect you want (industry jargon for this is data cube). The rest is just some scripting to get the pretty graphs.
OK, I know I am oversimplifying things here. I know I might have missed many different aspects (nice reports? export to Excel? predictions?), but from a computer science point of view I simply cannot see anything beyond a database index here.
I was told that some BI tools support compression. Lucene supports that, too. I was told that some BI tools are capable of keeping all index in the memory. For that there is a Lucene cache.
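To put the "data cube" description above in concrete terms, here is a toy sketch (field names and data are made up) of pre-aggregating facts along every combination of dimensions so that reports become simple lookups:

```python
# Pre-aggregate a fact table along every subset of dimensions: the "cube".
from collections import defaultdict
from itertools import combinations

facts = [
    {"country": "UK", "product": "print", "month": "2010-01", "downloads": 42},
    {"country": "UK", "product": "cms",   "month": "2010-01", "downloads": 17},
    {"country": "DE", "product": "print", "month": "2010-02", "downloads": 30},
]
dimensions = ["country", "product", "month"]

cube = defaultdict(int)
for fact in facts:
    for r in range(len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            key = tuple((d, fact[d]) for d in dims)
            cube[key] += fact["downloads"]

print(cube[(("country", "UK"),)])                          # 59: all UK downloads
print(cube[(("product", "print"), ("month", "2010-02"))])  # 30
print(cube[()])                                            # 89: grand total
```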
Speaking of the two candidates (Wabit and QlikView): the first is simply immature (I got dozens of exceptions when trying to step outside of what was suggested in their demo), whereas the other only works under Windows (not very nice, but I could live with that) and the integration would likely require me to write some VBScript (yuck!). I had to spend a couple of hours on the QlikView forums just to get a simple date range control working, and failed because the Personal Edition I had did not support the downloadable demo projects available on their site. Don't get me wrong, they're both good tools for what they have been built for, but I simply don't see any point in integrating with them as I wouldn't gain much.
To address (arguable) immatureness of Solr I will define an abstract API so I can move all the data to a database which supports full text queries if anything goes wrong. And if worse comes to worse, I can always write stuff on top of Solr/Lucene if I need to.
If you're truly in a scenario where you're not sure what you don't know, I think it's best to explore an open-source tool and evaluate its usefulness before diving into your own implementation. It could very well be that using the open-source solution will help you further crystallise your own understanding and required features.
I had previously worked with an open-source solution called Pentaho. I seriously felt that I understood a whole lot more by learning to use Pentaho's features for my purposes. Of course, as is the case with most open-source solutions, Pentaho seemed a bit intimidating at first, but I managed to get a good grip on it in a month's time. We also worked with the Kettle ETL tool and Mondrian cubes, which I think most of the serious BI tools these days build on top of.
Earlier, all these components were independent, but of late I believe Pentaho took ownership of all these projects.
But once you're confident about what you need and what you don't, I'd suggest building a basic reporting tool of your own on top of a Mondrian implementation. Customising a sophisticated open-source tool can indeed be a big issue. Besides, there are licenses to be wary of. I believe Pentaho is GPL, though you might want to check on that.
First you should make clear what your reports should show. Which reporting features do you need? Which output formats do you want? Do you want to show them in the browser (HTML), as PDF, or with an interactive viewer (Java/Flash)? Where is the data (database, Java, etc.)? Do you need ad-hoc reporting or only some hard-coded reports? These are only some of the questions.
Without answers to these questions it is difficult to give a real recommendation, but my general recommendation would be i-net Clear Reports (used to be called i-net Crystal-Clear). It is a Java tool. It is a commercial tool, but the cost is lower than SAP and co.

Evaluation of Code Metrics [closed]

There has been a considerable amount of discussion about code metrics (e.g.: What is the fascination with code metrics?). I (as a software developer) am really interested in those metrics because I think that they can help one to write better code. At least they are helpful when it comes to finding areas of code that need some refactoring.
However, what I would like to know is the following. Are there any evaluations of those source code metrics that prove that they really do correlate with the bug rate or the maintainability of a method? For example: do methods with a very high cyclomatic complexity really introduce more bugs than methods with a low complexity? Or do methods with a high difficulty level (Halstead) really take much more effort to maintain than methods with a low one?
Maybe someone knows about some reliable research in this area.
Thanks a lot!
Good question, no straight answer.
There are research papers available that show relations between, for example, cyclomatic complexity and bugs. The problem is that most research papers are not freely available.
I have found the following: http://www.pitt.edu/~ckemerer/CK%20research%20papers/CyclomaticComplexityDensity_GillKemerer91.pdf. It shows a relation between cyclomatic complexity and productivity, though. It also has a few references to other papers, and it is worth trying to google them.
Here are some:
Object-oriented metrics that predict maintainability
A Quantitative Evaluation of Maintainability Enhancement by Refactoring
Predicting Maintainability with Object-Oriented Metrics - An Empirical Comparison
Investigating the Effect of Coupling Metrics on Fault Proneness in Object-Oriented Systems
The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics
Have a look at this article from Microsoft research. In general I'm dubious of development wisdom coming out of Microsoft, but they do have the resources to be able to do long-term studies of large products. The referenced article talks about the correlation they've found between various metrics and project defect rate.
Finally I did find some papers about the correlation between software metrics and the error-rate but none of them was really what I was looking for. Most of the papers are outdated (late 80s or early 90s).
I think that it would be quite a good idea to start an analysis of current software. In my opinion it should be possible to investigate some popular open source systems. The source code is available and (what I think is much more important) many projects use issue trackers and some kind of version control system. Probably it would be possible to find a strong link between the logs of the versioning systems and the issue trackers. This would lead to a very interesting possibility of analyzing the relation between some software metrics and the bug rate.
Maybe there still is a project out there that does exactly what I've described above. Does anybody know about something like that?
We conducted an empirical study about the bug prediction capabilities of the well-known Chidamber and Kemerer object-oriented metrics. It turned out that these metrics combined can predict bugs with an accuracy above 80% when we applied proper machine learning models. If you are interested, you can read the full study in the following paper:
"Empirical Validation of Object-Oriented Metrics on Open Source Software for Fault Prediction. In IEEE Transactions on Software Engineering, Vol. 31, No. 10, October 2005, pages 897-910."
I too was once fascinated with the promises of code metrics for measuring likely quality, and discovering how long it would take to write a particular piece of code given its design complexity. Sadly, the vast majority of claims for metrics were hype and never bore any fruit.
The largest problem is that the outputs we want to know (quality, time, $, etc.) depend on too many factors that cannot all be controlled for. Here is just a partial list:
Tool(s)
Operating system
Type of code (embedded, back-end, GUI, web)
Developer experience level
Developer skill level
Developer background
Management environment
Quality focus
Coding standards
Software processes
Testing environment/practices
Requirements stability
Problem domain (accounting/telecom/military/etc.)
Company size/age
System architecture
Language(s)
See here for a blog that discusses many of these issues, giving sound reasons for why the things we have tried so far have not worked in practice. (Blog is not mine.)
https://shape-of-code.com
This link is good, as it deconstructs one of the most visible metrics, the Maintainability Index, found in Visual Studio:
https://avandeursen.com/2014/08/29/think-twice-before-using-the-maintainability-index/
See this paper for a good overview of quite a large number of metrics, showing that they do not correlate well with program understandability (which itself should correlate with maintainability): "Automatically Assessing Code Understandability: How Far Are We?", by Scalabrino et al.

Software quality metrics [closed]

I was wondering if anyone has experience with metrics used to measure software quality. I know there are code complexity metrics, but I'm wondering if there is a specific way to measure how well it actually performs during its lifetime. I don't mean runtime performance, but rather a measure of the quality. Any suggested tools that would help gather these are welcome too.
Are there measurements to answer these questions:
How easy is it to change/enhance the software, robustness
If it is a common/general enough piece of software, how reusable is it
How many defects were associated with the code
Has this needed to be redesigned/recoded
How long has this code been around
Do developers like how the code is designed and implemented
Seems like most of this would need to be closely tied with a CM and bug reporting tool.
If measuring code quality in the terms you describe were a straightforward job and the metrics accurate, there would probably be no need for project managers anymore. Even more, the distinction between good and poor managers would be very small. Because it isn't, that just shows that getting an accurate idea of the quality of your software is no easy job.
Your questions span multiple areas that are quantified differently or are very subjective to quantify, so you should group them into categories that correspond to common targets. Then you can assign an "importance" factor to each category and derive some metrics from that.
For instance you could use static code analysis tools for measuring the syntactic quality of your code and derive some metrics from that.
You could also derive metrics from bugs/lines of code using a bug tracking tool integrated with a version control system.
For measuring robustness, reuse and efficiency of the coding process you could evaluate the use of design patterns per feature developed (of course where it makes sense). There's no tool that will help you achieve this, but if you monitor your software growing bigger and put numbers on these, it can give you a pretty good idea of how your project is evolving and whether it's going in the right direction. Introducing code review procedures could help you keep track of these more easily and possibly address them early in the development process. A number to put on these could be the percentage of features implemented using the appropriate design patterns.
While metrics can be quite abstract and subjective, if you dedicate time to it and always try to improve them, it can give you useful information.
A few things to note about metrics in the software process though:
Unless you do them well, metrics could prove to do more harm than good.
Metrics are difficult to do well.
You should be cautious in using metrics to rate individual performance or offering bonus schemes. Once you do this everyone will try to cheat the system and your metrics will prove worthless.
If you are using Ruby, there are some tools to help you out with metrics, ranging from LOCs/Method and Methods/Class to Saikuro's cyclomatic complexity.
My boss actually gave a presentation on the software metrics we use at a Ruby conference last year; these are the slides.
An interesting tool that gives you a lot of metrics at once is metric_fu. It checks a lot of interesting aspects of your code: stuff that is highly similar, changes a lot, or has a lot of branches. All signs your code could be better :)
I imagine there are lot more tools like this for other languages too.
There is a good thread from the old Joel on Software Discussion groups about this.
I know that some SVN stat programs provide an overview of changed lines per commit. If you have a bug tracking system, and the people fixing bugs or adding features state their commit number when the bug is fixed, you can then calculate how many lines were affected by each bug/new feature request. This could give you a measurement of changeability.
The next thing is simply to count the number of bugs found and set them in ratio to the number of code lines. There are reference values for how many bugs a high-quality piece of software should have per code line.
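The bugs-per-lines ratio itself is trivial to compute once both numbers are exported from your tracker and your line counter; a minimal sketch:

```python
# Defect density as a simple ratio; "bugs" comes from your bug tracker and
# "loc" from whatever line-counting rule you standardised on.
def defect_density(bugs_found, loc):
    """Defects per thousand lines of code (KLOC)."""
    return bugs_found / (loc / 1000.0)

print(defect_density(bugs_found=37, loc=52000))  # ~0.71 defects per KLOC
```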
You could do it in an economic way or in a programmer's way.
In the economic way you measure the costs of improving code, fixing bugs, adding new features and so on. If you choose the second way, you may want to measure how much staff time goes into working with your program and how easy it is to, say, find and fix an average bug in person-hours. Certainly neither is flawless, because costs depend on the market situation and person-hours depend on the actual people and their skills, so it's better to combine both methods.
This way you get some instruments to measure the quality of your code. Of course you should take into account the size of your project and other factors, but I hope the main idea is clear.
A more customer focused metric would be the average time it takes for the software vendor to fix bugs and implement new features.
It is very easy to calculate, based on the created and closed dates in your bug tracking software.
If your average bug fixing/feature implementation time is extremely high, this could also be an indicator for bad software quality.
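As a rough sketch of the calculation (assuming you can export created/closed date pairs from the tracker; the sample data is invented):

```python
# Average bug resolution time from (created, closed) ISO date strings.
from datetime import datetime
from statistics import mean

def average_resolution_days(issues):
    """issues: iterable of (created, closed) ISO date strings."""
    durations = [
        (datetime.fromisoformat(closed) - datetime.fromisoformat(created)).days
        for created, closed in issues
    ]
    return mean(durations)

issues = [("2011-03-01", "2011-03-04"), ("2011-03-02", "2011-03-20")]
print(average_resolution_days(issues))  # 10.5
```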
You may want to check the following page describing various aspects of software quality, including sample plots. Some of the quality characteristics you want to measure can be derived using a tool such as Sonar. It is very important to figure out how you would want to model some of the following aspects:
Maintainability: You did mention how easy it is to change/test the code or to reuse it. These relate to the testability and re-usability aspects of maintainability, which is considered a key software quality characteristic. Thus, you could measure maintainability as a function of testability (unit test coverage) and re-usability (a cohesiveness index of the code).
Defects: Defects alone may not be a good thing to measure. However, if you can model defect density, it could give you a good picture.
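One deliberately naive way to turn the maintainability idea into a single trackable number; the weights below are arbitrary and purely illustrative, something you would tune for your own project:

```python
# Combine test coverage and a cohesion index into one 0..100 "maintainability"
# score. The weighting scheme is an assumption, not an established formula.
def maintainability(coverage_percent, cohesion_index,
                    w_coverage=0.6, w_cohesion=0.4):
    """Both inputs normalised to 0..100; returns a 0..100 score."""
    return w_coverage * coverage_percent + w_cohesion * cohesion_index

print(maintainability(coverage_percent=75, cohesion_index=60))  # 69.0
```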

Where can I find sample algorithms for analyzing historical stock prices?

Can anyone point me in the right direction?
Basically, I'm trying to analyze stock prices and see if I can spot any patterns. I'm using PHP and MySQL to do this. Where can I find sample algorithms like the ones used in MetaStock or thinkorswim? I know they are closed source, but are there any tutorials available for beginners?
Thank you,
P.S. I don't even know what to search for in google :(
A basic, educational algorithm to start with is a dual-crossover moving average. Simply chart fast (say, 5-day) and slow (say, 10-day) moving averages of a stock's closing price, and you have a weak predictor of when to buy long (fast line goes above slow) and sell short (slow line goes above the fast). After getting this working, you could implement exponential smoothing (see previously linked wiki article).
That would be a decent start. Take a look at other technical analysis techniques, but do keep in mind that this is quite a perilous method of trading.
Update: As for actually implementing this? You're a PHP programmer, so here is a charting library for PHP. This is the one I used a few years ago for this very project, and it worked out swimmingly. Maybe someone else can recommend a better one. If you need a free source of data, take a look at Yahoo! Finance's historical data. They dispense CSV files containing daily opening prices, closing prices, trading volume, etc. of virtually every indexed corporation.
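For what it's worth, the crossover logic itself is only a few lines. Here is a sketch in Python (the same logic ports directly to PHP), using made-up prices; indices in the returned signals refer to positions in the price list:

```python
# Dual moving-average crossover: buy when the fast average crosses above the
# slow one, sell when it crosses below. `closes` is oldest-first.
def moving_average(values, window):
    return [
        sum(values[i - window + 1:i + 1]) / window if i >= window - 1 else None
        for i in range(len(values))
    ]

def crossover_signals(closes, fast=5, slow=10):
    fast_ma = moving_average(closes, fast)
    slow_ma = moving_average(closes, slow)
    signals = []
    for i in range(1, len(closes)):
        if None in (fast_ma[i - 1], slow_ma[i - 1]):
            continue  # not enough history yet
        if fast_ma[i - 1] <= slow_ma[i - 1] and fast_ma[i] > slow_ma[i]:
            signals.append((i, "buy"))   # fast line crosses above slow
        elif fast_ma[i - 1] >= slow_ma[i - 1] and fast_ma[i] < slow_ma[i]:
            signals.append((i, "sell"))  # fast line crosses below slow
    return signals

closes = [10, 10.5, 11, 10.8, 11.2, 11.5, 11.1, 10.9, 10.4, 10.2,
          10.0, 10.3, 10.9, 11.4, 11.8]
print(crossover_signals(closes))
```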
Check out the algorithms at Investopedia; FM Labs has formulas for a lot of technical analysis indicators.
First you will need a solid math background: statistics in general, correlation analysis, linear algebra... If you really want to push it, check out dimensional transposition. Then you will need a solid basis in data mining. Associations can be useful if you want to link strict numerical data with news headlines and other events.
One thing is for sure: you will most likely not find pre-digested algorithms out there that will make you rich...
I know someone who is trying just that... He is somewhat successful (meaning he is not losing money and is making a bit) and making his own algorithms... I should mention he has a doctorate in actuarial science.
Here are a few more links... hope they help out a bit
http://mathworld.wolfram.com/ActuarialScience.html
http://www.actuary.com/actuarial-science/
http://www.actuary.ca/
Best of luck to you
Save yourself time and use programs like NinjaTrader and Wealth-Lab. Both of them are great technical analysis platforms and accept C# as a programming language for defining your trading rules. Every possible technical indicator you can imagine is already included and if you need something more advanced you can always write your own indicator. You would also need a lot of data in order for your analysis to be statistically significant. For US stocks and ETFs, visit www.Kibot.com. We have good experience using their data.
Here's a pattern for ya
http://ddshankar.files.wordpress.com/2008/02/image001.jpg
I'd start with a good introduction to time series analysis and go from there. If you're interested in finding patterns then the interesting term is "1D pattern matching". But for that you need nice features, so google for "feature extraction in time series". Remember GIGO (garbage in, garbage out): make sure you have error-free stock price data for a sufficiently long time period before you start.
May I suggest that you do a little reading with respect to the Kalman filter? Wikipedia is a pretty good place to start:
http://en.wikipedia.org/wiki/Kalman_filter/
This should give you a little background on the problem of estimating and predicting the variables of some system (the stock market in this case).
But the stock market is not very well behaved, so you may want to familiarize yourself with non-linear extensions to the KF. Yes, the Wikipedia entry has sections on the extended KF and the unscented KF, but here is an introduction that is just a little more in-depth:
http://cslu.cse.ogi.edu/nsel/ukf/
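For intuition, here is a minimal one-dimensional Kalman filter in Python, just to show the predict/correct mechanics the articles describe. Choosing the process and measurement noise (q and r) sensibly for real market data is the hard part and is not addressed here:

```python
# Minimal 1D Kalman filter over a series of prices (constant-state model).
def kalman_1d(measurements, q=1e-4, r=0.5):
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise q
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the new measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

prices = [101.2, 100.8, 101.5, 102.1, 101.9, 102.6]
print(kalman_1d(prices))
```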
I suppose if anyone had ever tried this before then it would have been all over the news and very well known. So you may very well be on to something.
Use TradeStation.
It is a platform that lets you write software to analyze historical stock data. You can even write programs that would trade the stock, and you can back-test your program on historical data or run it in real time throughout the day.

What is statistical debugging?

What is statistical debugging? I haven't found a clear, concise explanation yet, but the term certainly sounds impressive.
Is it just a research topic, or is it being used somewhere, for actual development? In other words: Will it help me find bugs in my program?
I created statistical debugging, along with various wonderful collaborators across the years. I wish I'd noticed your question months ago! But if you are still curious, perhaps this late answer will be better than nothing.
At a very high level, statistical debugging is the idea of using statistical models of program success/failure to track down bugs. These statistical models expose relationships between specific program behaviors and eventual success or failure of a run. For example, suppose you notice that there's a particular branch in the program that sometimes goes left, sometimes right. And you also notice that runs where the branch goes left are fine, but runs where the branch goes right are 75% more likely to crash. So there's a statistical correlation here which may be worth investigating more closely. Statistical debugging formalizes and automates that process of finding program (mis)behaviors that correlate with failure, thereby guiding developers to the root causes of bugs.
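To make the intuition concrete, here is a toy sketch (not the actual CBI scoring model, which is more elaborate) that ranks observed predicates by how often the runs in which they were true ended in failure:

```python
# Toy statistical debugging: correlate observed predicates with run failure.
from collections import defaultdict

# Each run: (set of predicates that were true during the run, did_it_fail)
runs = [
    ({"branch_right", "x_gt_0"}, True),
    ({"branch_right"},           True),
    ({"branch_left", "x_gt_0"},  False),
    ({"branch_left"},            False),
    ({"branch_right", "x_gt_0"}, False),
]

stats = defaultdict(lambda: [0, 0])          # predicate -> [failures, total]
for predicates, failed in runs:
    for p in predicates:
        stats[p][1] += 1
        stats[p][0] += int(failed)

for predicate, (failures, total) in sorted(stats.items()):
    print(f"{predicate}: fails in {failures}/{total} runs "
          f"({100.0 * failures / total:.0f}%)")
```

Here "branch_right" fails in 2 of 3 runs while "branch_left" never fails, so the right branch is the behavior worth investigating first.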
Getting back to your original question:
Is it just a research topic, or is it being used somewhere, for actual development?
It is mostly a research topic, but it is out there in the "real" world in two ways:
The public deployment of the Cooperative Bug Isolation Project hunts for bugs in various Open Source programs running under Fedora Linux. You can download pre-instrumented packages and every time you use them you're feeding us data to help us find bugs.
Microsoft has released Holmes, an implementation of statistical debugging for .NET. It's nicely integrated into Visual Studio and should be a very easy way for you to use statistical debugging to help find your own bugs in your own code. I've worked closely with Microsoft Research on Holmes, and these are good smart people who know how to put out high-quality tools.
One warning to keep in mind: statistical debugging needs ample raw data to build good statistical models. In CBI's public deployment, that raw data comes from real end users. With Holmes, I think Microsoft assumes that the raw data will come from in-house automated unit tests and manual tests. What won't work is code with no runs at all, or with only failing runs but no successful counterexamples. Statistical debugging works off of the contrast between good and bad runs, so you need to feed it both. If you want bug-hunting tools without runs, then you'll need some sort of static analysis. I do research on that too, but that's not statistical debugging. :-)
I hope this helped and was not too long. I'm happy to answer any follow-up questions. Happy bug-hunting!
that's when you ship software saying "well, it probably works..."
;-)
EDIT: it's a research topic where machine learning and statistical clustering are used to try to find patterns in programs that are good predictors of bugs, to identify where more bugs are likely to hide.
It sounds like statistical sampling. When you buy a product, there's a good chance that not every single product coming off the "assembly line" has been checked for quality.
Statistical sampling calls for checking a certain percentage of products to almost ensure they're all problem-free. It minimizes the effort at the risk of some problems sneaking through and is absolutely necessary where the testing process is a destructive one - if you carry out destructive testing on 100% of your production line, that's not going to leave much for distribution :-)
To be honest, unless you're checking every single execution path and every single possible input value, you're already doing this in your testing. The amount of effort required to test everything for any but the most simplistic systems is not worth it. The extra cost would make your product a non-compete item.
Note that statistical sampling doesn't just involve testing every 100th unit. There are ways to target the sampling to improve the chance of catching problems. For example, if historical data suggests most errors are introduced at a specific phase, target that phase. If one of your developers is more problematic than others, check his stuff more closely.
From what I can see from a cursory glance at some research papers, statistical debugging is just that - targeting areas based on past history of problems.
I know we already do this for our software. Since any bugs that get fixed have to pass unit and system tests that replicate the problem (and our TDD says these tests should be written before trying to fix the bug), those tests are automatically added to the regression test suite so that those areas that cause more problems naturally are tested more often in the future.
