Anyone used the ABC Metric for measuring an application's size?

There are some nice things about it (like it encapsulates the concept of Cyclomatic complexity), and I was wondering if anyone has used it in "real life". If so, what are your experiences? Is it a useful measure of size (as opposed to KLOC or Function Points)?
For those wondering what I'm smoking:
Here's a link to some info on it: http://c2.com/cgi/wiki/Wiki?AbcMetric
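For context, an ABC score is a vector of three counts, Assignments, Branches (calls), and Conditions, usually reported as the vector's magnitude |ABC| = sqrt(A^2 + B^2 + C^2). Here is a deliberately rough Python sketch of the idea; the counting rules below are simplified for illustration and don't follow the exact language-specific rules described on the wiki page:

    import ast
    import math

    def abc_magnitude(a, b, c):
        # |ABC| = sqrt(A^2 + B^2 + C^2), the usual single-number summary.
        return math.sqrt(a * a + b * b + c * c)

    class AbcCounter(ast.NodeVisitor):
        # Very rough counter for Python source: assignments, calls ("branches"),
        # and comparisons ("conditions").
        def __init__(self):
            self.a = self.b = self.c = 0
        def visit_Assign(self, node):
            self.a += 1
            self.generic_visit(node)
        def visit_AugAssign(self, node):
            self.a += 1
            self.generic_visit(node)
        def visit_Call(self, node):
            self.b += 1
            self.generic_visit(node)
        def visit_Compare(self, node):
            self.c += 1
            self.generic_visit(node)

    source = "def clamp(x, lo, hi):\n    if x < lo:\n        x = lo\n    if x > hi:\n        x = hi\n    return x\n"
    counter = AbcCounter()
    counter.visit(ast.parse(source))
    print(counter.a, counter.b, counter.c)                           # 2 0 2
    print(round(abc_magnitude(counter.a, counter.b, counter.c), 2))  # 2.83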

The application's sheer 'size' can be safely measured in LOC or any other metric you can think of, as long as you use the same approach across the whole application.
However, size on its own really matters only when you're talking about refactoring and maintenance of the code base, and it's almost mandatory to use size metrics in conjunction with coverage statistics.
But most of the time Function Points or similar concepts give you a much better view of how big your application really is.
For example, if it has 10 FP it's tiny; if it has 200 it's probably big.
But if it has 100 KLOC, what does that tell me on its own, aside from the fact that I'll probably spend some time reading those lines? Almost nothing; I have to take an enormous number of other factors into account to make sense of this metric.
Obviously, FPs have the significant downside of being expensive to calculate properly.

Related

Why is average so popular when measuring application performance

When measuring application performance (response time, for example) it's easy to come across averages (the mean). ab, httperf and a bunch of other utilities report the mean and standard deviation. But from a theoretical point of view that doesn't make a lot of sense to me. Here is why.
The mean is good at describing a symmetrically distributed population, because for a symmetric (unimodal) distribution the mean coincides with the median and the mode. But response times are not distributed symmetrically. They are more like exponential. In this case the average tells us nothing.
It's more convenient to work with percentiles, which tell us what response time we can expect for a given percentage of responses.
Am I missing something, or is the mean popular just because it's very simple to calculate?
All kinds of tools get their features not necessarily from what makes sense, but from users' expectations.
You're absolutely right that the distributions are non-negative and heavily skewed, and that percentiles would be more informative.
Alternatively, a distribution more like lognormal or chi-square would be a little better.
Yes, you are missing something.
The whole point of descriptive statistics is to present a few numbers to describe (or represent or model or ...) a large number of numbers. They aid the comprehension of large datasets, the extraction of information from data, and the approximate comparison of datasets whose exact comparison would be too large and bewildering for the limitations of the human mind.
But no single descriptive statistic is always fit for all purposes, and no one is dictating to you that you must or should or ought to use the mean. If it doesn't suit your purposes, use something else.
As it happens, you are quite wrong to write "They are more like exponential. In this case the average tells us nothing." For an exponential distribution with rate parameter lambda the mean is simply 1/lambda, so the mean tells you everything about an exponential distribution.
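To make both points concrete, here's a quick sketch (assuming numpy and a purely exponential latency model; real traffic is messier): the mean pins down the whole exponential distribution, yet the tail percentiles that users actually feel sit far above it.

    import numpy as np

    rng = np.random.default_rng(42)
    # Simulated response times in ms: exponential with mean 100 (lambda = 1/100).
    latencies = rng.exponential(scale=100.0, size=100_000)

    print("mean  ", round(np.mean(latencies), 1))            # ~100  (= 1/lambda)
    print("median", round(np.percentile(latencies, 50), 1))  # ~69   (= ln(2)/lambda)
    print("p95   ", round(np.percentile(latencies, 95), 1))  # ~300
    print("p99   ", round(np.percentile(latencies, 99), 1))  # ~461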
I'm not an expert in statistics, but I believe average values are used so much because they are the values that help to measure the scalability of a system.
You need to consider your average values first to know how your system needs to behave under certain workloads, and that behaviour needs to be predictable; you usually are not very interested in outliers, at least not at first.
Of course you need to look at your minimum values and the peak values to know the moment your system is going to hit a bottleneck, but the average values show you, as I said, the expected and predictable behaviour.

Estimating a project with many unknowns

I'm working on a project with many unknowns like moving the app from one platform to another.
My original estimations are way off and there is no way I can really know for sure when this will end.
How can I deal with the inability to estimate such a project? It's not that I'm adding a button to a screen or designing a web site, or creating an app or even fixing bugs. These are not methods with bugs; these are assumptions baked into the overall code which are no longer correct, discovered step by step, and each one has to be analyzed and mitigated, bringing many more unknowns.
I happened to write a master's thesis about software estimation, and here are the lessons I've learned:
- 1st count, 2nd compute, 3rd judge: first try to identify items in your work that are countable, e.g. files, classes, LOC, UIs, etc. Then use this data to calculate the effort (in person-days). Use judgement only as a last resort.
- Document your estimation! Show numbers. This minimizes your risk, since you present the results not as your opinion but as more or less objective figures. (In general: the more paper, the cleaner the backside.)
- Estimation is not a commitment. A commitment is one number; an estimation is always a range, so give your estimation as a range (use the cone of uncertainty to select the range properly: http://www.construx.com/Page.aspx?hid=1648 ).
- Divide: use a WBS (work breakdown structure), divide your work into small pieces and estimate them separately. The granularity depends on the overall length, but a single work package shouldn't be bigger than about 10% of the entire effort.
- Estimate effort first, then schedule, then costs.
- Consider estimation as support for planning, and re-estimate in each project phase (see the cone of uncertainty).
I would suggest the book http://www.stevemcconnell.com/est.htm which deals with all these points, in particular how to deal with bosses who try to pull a commitment out of you.
Regards,
Valentin Heinitz
There's no really right answer for coming up with an accurate estimation, because there's no way to know it.
As for estimating the work itself: think about how each step can be divided into separate sub-steps, and break those down even smaller, until you get as fair a picture of the work as you can, with chunks small and discrete enough to give sound estimates for. If you can, come up with both an expected time and a worst-case time, to get a range of where you could land.
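If it helps to mechanize that, here's a small sketch of a three-point roll-up over a hypothetical breakdown (the task names and numbers are invented, and the PERT-style weighting is just one common convention, not something the approach requires):

    # Hypothetical breakdown items: (task, best-case, expected, worst-case) in days.
    tasks = [
        ("port build scripts",       1, 2,  5),
        ("replace platform API foo", 3, 5, 15),
        ("regression testing",       2, 4, 10),
    ]

    def pert(best, likely, worst):
        # Classic three-point (PERT) weighting of the expected value.
        return (best + 4 * likely + worst) / 6

    expected = sum(pert(b, l, w) for _, b, l, w in tasks)
    best_case = sum(b for _, b, _, _ in tasks)
    worst_case = sum(w for _, _, _, w in tasks)

    print(f"estimate: ~{expected:.1f} days (range {best_case}-{worst_case} days)")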
Another way to approach this is to ignore the old system. It sounds like a headache. Make an estimate of scrapping the old system and implementing a new one from scratch, or integrating a 3rd-party, off-the-shelf solution. If there's a case to be made for this, it is worth at least investigating.
Sounds like a post for postsecret not SO. :)
I would tell him that it will be done when it's done, and if that's not good enough, he can learn to program and help you. Then again, you might get fired, but hey, that might be better.
Tell him more or less what you told us: the project is too volatile to give an accurate estimate, and the best you can do is give an estimate for a given task. As long as the number of tasks is unknown, so is the overall estimate. If he is at all worth his salary he would rather hear this than some made-up number. This is not uncommon when dealing with a large legacy code base.
It's not that I'm adding a button to a screen or designing a web site, or creating an app or even fixing bugs.
That is the real problem: you cannot estimate what you don't have experience in. The only thing you can do is pad your estimate until you think it is a reasonable amount of time. The more unknowns you think there are, the more you pad; the less you know about the work, the more you pad.
I read the book below and it spoke at length about accuracy vs. precision. Basically, you can be accurate but have a very large range. For instance, you can be certain a task will take between 1 day and 1 year to complete. That is not very precise, but it is really accurate.
Software Estimation: Demystifying the Black Art
Some tips for estimating

using software metrics for measuring productivity of pair programming

What software metrics can be used to measure the performance of pair programming?
To be clear:
are there any metrics that measure pair programming specifically, rather than the individual programmer? What parameters are used for the measurement?
For example, if we want to measure the cost of both individual and pair programming,
let's assume that for individual programming Cost = x, so for the pair it will be Cost = 2*x,
right?
And the same for time: for an individual Time = t, while for a pair Time = 2*t.
So if I would like to use lines of code to measure product size, is there any difference between individual and pair programming when using this metric?
Any ideas?
Sorry to spoil your party, but lines of code is one of the worst metrics possible, especially if people know their assessment or bonus is in any way tied to the metric. It actively encourages cut-and-paste programming and other atrocities.
It's more effort, but why don't you categorise the workload in terms of expected effort for one person, based on your historical data? Or get some programmers to agree to do a few projects redundantly, rotating between pair programming and individual work, so you can see how the same programmers go at each. As one good programmer can be more productive than two average programmers (I vaguely remember an old IBM study concluding someone in the top percentile was 27x more productive than the median), it's useful to see the same programmers doing it both ways.
If objectively discovering the right process through such an experiment is too costly in terms of lost short-term productivity, then you're better off not bothering with the LOC metrics anyway... good programmers knowing their work arrangements are being based on such will probably be highly unimpressed.
Remember that there are also intangibles involved... pair programming - IMHO - forces people to keep focused, and to make design decisions that are more rounded and professional. Just the social contact can help relieve boredom, though it may stress some people too. My suspicion is that - whether or not it's faster to begin with - it makes for better, more maintainable results. It also ensures skill and knowledge transfer. You should factor in such intangible aspects as best you can - maybe doing interviews or anonymous surveys with the trial participants.
I guess what you're trying to ask is how to measure the efficiency of a team that uses pair programming. If so, then the answer is that measuring efficiency doesn't depend on the method or process the team is using. You should try to evaluate the quality of their product releases, with metrics like the number of issues identified post-release, and probably the velocity.
And please, don't use lines of code to measure efficiency. It doesn't make sense: lines of code is a measure of product size, not developer efficiency. It's like using height or weight to judge how smart someone is; there is no correlation between the amount of code and individual efficiency.
If you are interested in more software metrics, take a look at http://www.sdlcmetrics.org

How to detect anomalous resource consumption reliably?

This question is about a whole class of similar problems, but I'll ask it as a concrete example.
I have a server with a file system whose contents fluctuate. I need to monitor the available space on this file system to ensure that it doesn't fill up. For the sake of argument, let's suppose that if it fills up, the server goes down.
It doesn't really matter what it is -- it might, for example, be a queue of "work".
During "normal" operation, the available space varies within "normal" limits, but there may be pathologies:
Some other (possibly external) component that adds work may run out of control
Some component that removes work seizes up, but remains undetected
The statistical characteristics of the process are basically unknown.
What I'm looking for is an algorithm that takes, as input, timed periodic measurements of the available space (alternative suggestions for input are welcome), and produces as output, an alarm when things are "abnormal" and the file system is "likely to fill up". It is obviously important to avoid false negatives, but almost as important to avoid false positives, to avoid numbing the brain of the sysadmin who gets the alarm.
I appreciate that there are alternative solutions like throwing more storage space at the underlying problem, but I have actually experienced instances where 1000 times wasn't enough.
Algorithms which consider stored historical measurements are fine, although on-the-fly algorithms which minimise the amount of historic data are preferred.
I have accepted Frank's answer, and am now going back to the drawing-board to study his references in depth.
There are three cases, I think, of interest, not in order:
The "Harrods' Sale has just started" scenario: a peak of activity that at one-second resolution is "off the dial", but doesn't represent a real danger of resource depletion;
The "Global Warming" scenario: needing to plan for (relatively) stable growth; and
The "Google is sending me an unsolicited copy of The Index" scenario: this will deplete all my resources in relatively short order unless I do something to stop it.
It's the last one that's (I think) most interesting, and challenging, from a sysadmin's point of view.
If it is actually related to a queue of work, then queueing theory may be the best route to an answer.
For the general case you could perhaps attempt a (multiple?) linear regression on the historical data, to detect if there is a statistically significant rising trend in the resource usage that is likely to lead to problems if it continues (you may also be able to predict how long it must continue to lead to problems with this technique - just set a threshold for 'problem' and use the slope of the trend to determine how long it will take). You would have to play around with this and with the variables you collect though, to see if there is any statistically significant relationship that you can discover in the first place.
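As a rough sketch of the "slope plus threshold" idea (the numbers below are invented, and the statistical-significance check mentioned above is left out for brevity):

    import numpy as np

    # Hypothetical samples: hours since start vs GB used on the file system.
    t    = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
    used = np.array([410, 415, 423, 430, 441, 450, 462], dtype=float)
    capacity_gb = 500.0

    slope, intercept = np.polyfit(t, used, deg=1)  # trend in GB per hour

    if slope > 0:
        hours_to_full = (capacity_gb - used[-1]) / slope
        # Alarm if the trend says we run out within the sysadmin's reaction window.
        if hours_to_full < 24:
            print(f"ALARM: ~{hours_to_full:.1f} h to full at {slope:.1f} GB/h")
    else:
        print("no rising trend")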
Although it covers a completely different topic (global warming), I've found tamino's blog (tamino.wordpress.com) to be a very good resource on statistical analysis of data that is full of knowns and unknowns. For example, see this post.
edit: as per my comment I think the problem is somewhat analogous to the GW problem. You have short term bursts of activity which average out to zero, and long term trends superimposed that you are interested in. Also there is probably more than one long term trend, and it changes from time to time. Tamino describes a technique which may be suitable for this, but unfortunately I cannot find the post I'm thinking of. It involves sliding regressions along the data (imagine multiple lines fitted to noisy data), and letting the data pick the inflection points. If you could do this then you could perhaps identify a significant change in the trend. Unfortunately it may only be identifiable after the fact, as you may need to accumulate a lot of data to get significance. But it might still be in time to head off resource depletion. At least it may give you a robust way to determine what kind of safety margin and resources in reserve you need in future.
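I can't reproduce Tamino's method from memory, but the rough flavour of "let the data pick the inflection points" can be approximated with a sliding regression; the window size and factor below are arbitrary knobs for illustration, not recommendations:

    import numpy as np

    def rolling_slopes(t, y, window=20):
        # Slope of a least-squares line fitted to each trailing window of samples.
        slopes = []
        for i in range(window, len(y) + 1):
            s, _ = np.polyfit(t[i - window:i], y[i - window:i], deg=1)
            slopes.append(s)
        return np.array(slopes)

    def trend_break(t, y, window=20, factor=3.0):
        # Flag when the latest windowed slope jumps well above its own history.
        s = rolling_slopes(t, y, window)
        if len(s) < 2:
            return False
        baseline = np.median(np.abs(s[:-1])) + 1e-9
        return abs(s[-1]) > factor * baseline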

How to get scientific results from non-experimental data (datamining?)

I want to obtain maximum performance out of a process with many variables, many of which cannot be controlled.
I cannot run thousands of experiments, so it'd be nice if I could run hundreds of experiments and
vary many controllable parameters
collect data on many parameters indicating performance
'correct,' as much as possible, for those parameters I couldn't control
Tease out the 'best' values for those things I can control, and start all over again
It feels like this would be called data mining, where you're going through tons of data which doesn't immediately appear to relate, but does show correlation after some effort.
So... Where do I start looking at algorithms, concepts, theory of this sort of thing? Even related terms for purposes of search would be useful.
Background: I like to do ultra-marathon cycling, and keep logs of each ride. I'd like to keep more data, and after hundreds of rides be able to pull out information about how I perform.
However, everything varies - routes, environment (temp, pres., hum., sun load, wind, precip., etc.), fuel, attitude, weight, water load, etc., etc., etc. I can control a few things, but running the same route 20 times to test out a new fuel regime would just be depressing, and take years to perform all the experiments that I'd like to do. I can, however, record all these things and more (telemetry on the bicycle FTW).
It sounds like you want to do some regression analysis. You certainly have plenty of data!
Regression analysis is an extremely common modeling technique in statistics and science. (It could be argued that statistics is the art and science of regression analysis.) There are many statistics packages out there to do the computation you'll need. (I'd recommend one, but I'm years out of date.)
Data mining has gotten a bad name because far too often people assume correlation equals causation. I've found that a good technique is to start with variables you know have an influence and build a statistical model around them first. You know that wind, weight and climb have an influence on how fast you can travel, and statistical software can take your dataset and calculate what the correlation between those factors is. That will give you a statistical model, or linear equation:
speed = x*weight + y*wind + z*climb + constant
When you explore new variables, you will be able to see if the model is improved or not by comparing a goodness of fit metric like R-squared. So you might check if temperature or time of day adds anything to the model.
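As a sketch of what that looks like in practice (the ride numbers below are made up, and with only a handful of rows the actual R-squared values are meaningless; it's the mechanics that matter):

    import numpy as np

    def r_squared(y, y_hat):
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - np.mean(y)) ** 2)
        return 1.0 - ss_res / ss_tot

    def fit(X, y):
        # Ordinary least squares with an intercept column.
        A = np.column_stack([X, np.ones(len(y))])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs, A @ coeffs

    # Made-up ride log: weight (kg), wind (km/h, headwind positive), climb (m), temp (C).
    weight = np.array([78, 77, 79, 76, 78, 77, 80, 76], dtype=float)
    wind   = np.array([10, -5, 20,  0, 15, -10, 5, 25], dtype=float)
    climb  = np.array([800, 200, 1500, 100, 900, 300, 1200, 600], dtype=float)
    temp   = np.array([12, 25, 8, 30, 15, 22, 10, 18], dtype=float)
    speed  = np.array([24.1, 28.3, 21.0, 29.5, 23.2, 27.8, 22.0, 24.9])

    base = np.column_stack([weight, wind, climb])
    _, base_pred = fit(base, speed)
    _, temp_pred = fit(np.column_stack([base, temp]), speed)

    print("R^2, weight+wind+climb   :", round(r_squared(speed, base_pred), 3))
    print("R^2, ... plus temperature:", round(r_squared(speed, temp_pred), 3))

Note that plain R-squared never goes down when you add a variable, so adjusted R-squared (or checking the model against a held-out set of rides) is the fairer comparison.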
You may want to apply a transformation to your data. For instance, you might find that you perform better on colder days, but really cold days and really hot days might hurt performance. In that case, you could assign temperatures to bins or segments: < 0°C; 0°C to 40°C; > 40°C, or some such. The key is to transform the data in a way that matches a rational model of what is going on in the real world, not just the data itself.
In case someone thinks this is not a programming related topic, notice that you can use these same techniques to analyze system performance.
With that many variables you have too many dimensions and you may want to look at Principal Component Analysis. It takes some of the "art" out of regression analysis and lets the data speak for itself. Some software to do that sort of analysis is shown at the bottom of the link.
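A minimal numpy-only sketch of the idea (the packages linked on that page do the same thing with more bells and whistles):

    import numpy as np

    def pca(X, n_components=2):
        # Principal components via SVD of the mean-centred data matrix.
        Xc = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = (S ** 2) / np.sum(S ** 2)   # fraction of variance per component
        scores = Xc @ Vt[:n_components].T       # rides projected onto the components
        return scores, explained[:n_components]

    # X: one row per ride, one column per logged variable (wind, temp, climb, ...).
    X = np.random.default_rng(0).normal(size=(100, 8))
    scores, explained = pca(X)
    print("variance explained by the first two components:", np.round(explained, 3))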
I have used the Perl module Statistics::Regression for somewhat similar problems in the past. Be warned, however, that regression analysis is definitely an art. As the warning in the Perl module says, it won't make sense to you if you haven't learned the appropriate math.
