What is the importance of average in performance metrics? - performance

As part of the performance tuning and load tests we usually do, I am forced to believe that we need to look at 90th percentiles. As per my understanding, 90 times out of 100, people got a response which is equal to or better than the 90th percentile number. However, my current clients always look at the average number. What is the impact of only looking at the average? Most of the time I see that between two tests, if the average is lower in test A, then the 90th percentile is also lower in test A.
So should we match the SLA on the average or on the 90th percentile?

I agree that this is not a pure programming question. But in my humble opinion, program performance and statistics are closely related anyway. That's why I think this question deserves an answer.
The two are different in nature. We have the average: the sum of all observations divided by the number of observations. We also have the median, or 50th percentile: half of the observations are above it, half are below.
There is a very visible difference between the two if your observations do not follow a bell curve, e.g. if you have positive outliers but no negative outliers.
Let's do a few number examples:
observations 2 4 6 8 - average and median are both 5
observations 1 1 10 - average is 4, median is 1.
Your 50th percentile could be argued to be any number between 1 and 10 here: for any of these numbers, two observations are below and one is above.
observations 1 4 1000 - average is 335, median is 4, the 50th percentile is also 4.
As you can see, the distribution of the numbers matters a lot.
Only if you have a symmetrical distribution (like a Gaussian bell curve) does the average equal the 50th percentile.
But you asked about the 90th percentile.
Essentially, nothing changes - the distribution, the number of outliers, and the most frequently observed values all affect your percentiles.
I suggest picking up a good book on statistics if you need to know more.
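To make this concrete, here is a small sketch of my own (assuming Python with numpy, and made-up response times) comparing the three statistics on a skewed sample:

import numpy as np

# Made-up response times in milliseconds: mostly fast, with a slow tail.
latencies = np.array([80, 85, 90, 95, 100, 105, 110, 120, 900, 2500])

print("average:         ", np.mean(latencies))          # dragged up by the two outliers
print("median (p50):    ", np.median(latencies))        # ignores how extreme the outliers are
print("90th percentile: ", np.percentile(latencies, 90))

Here the average lands far above what most of the (made-up) users actually experienced, while the median hides the slow tail; an SLA phrased as "90% of requests faster than X" maps directly onto the 90th percentile.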

Related

Is there a way to calculate the correlation coefficient between binary variables a and b?

So there are two variables:
a -- whether the person is older than 40 (binary, 0 or 1)
b -- whether they have a luxury car (binary, 0 or 1)
The following summary counts are given:
Total sample size -- 500
Total number of people above 40 -- 60
Total number of people who have a luxury car -- 40
Total number of people who have a luxury car and are above 40 -- 10
NOTE: Draw a Venn diagram if that helps.
Compute the coefficient of correlation between a and b.
A correlation function can handle binary values. Even categorical or enumerated items work: under the hood, the items are given numerical codes and the correlation is computed on those. In your case you simply want to know how often the two are the same versus how often they are opposite. If they were always opposite each other you would see -1; always the same, +1. Zero would mean no correlation.
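As an illustration (my own sketch, assuming Python with numpy), you can rebuild the 0/1 columns from the counts in the question and let a standard correlation routine produce the phi coefficient:

import numpy as np

# Rebuild the 0/1 columns from the summary counts in the question.
both    = 10                 # above 40 and has a luxury car
only_a  = 60 - both          # above 40, no luxury car
only_b  = 40 - both          # luxury car, not above 40
neither = 500 - both - only_a - only_b

a = np.array([1] * both + [1] * only_a + [0] * only_b + [0] * neither)
b = np.array([1] * both + [0] * only_a + [1] * only_b + [0] * neither)

print(np.corrcoef(a, b)[0, 1])   # the phi coefficient, roughly 0.12 for these counts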

Probability - expectation Puzzle: 1000 persons and a door

You stand in an office by a door, with a measuring tape. Every time a person walks in, you measure him or her and only keep a tally of the “record” tallest. If the new person is taller than everyone who came before, you count a record. If later another person is taller still, you have another record, etc.
1000 persons pass through the door. How many records do you expect to have?
(Assume independence of heights and arrival order. Also note that the answer does not depend on any assumption about the probability distribution other than independence.)
PS - I'm able to come up with the answer (~7.5) with a brute-force approach (running this scenario 1,000,000 times and taking the average), but here I'm looking for a theoretical approach.
Consider x_1 to x_1000 as the observed heights, and max(i) as the maximum of the sequence up to person i (with max(0) treated as minus infinity). The question reduces to finding the expected number of times max(i) changes.
for i = 0 to 999:
if x_(i+1) > max(i), then max(i) changes
Also, P(x_(i+1) > max(i)) = 1/(i+1), because each of the first i+1 people is equally likely to be the tallest among them.
Answer => the sum of 1/(i+1) for i from 0 to 999, i.e. the 1000th harmonic number, which is approximately 7.49.
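A quick sketch of my own in Python that checks both the harmonic-number formula and the brute-force estimate:

import random

# Theoretical answer: the 1000th harmonic number.
harmonic = sum(1 / (i + 1) for i in range(1000))

# Brute-force check: count how often the running maximum changes.
def count_records(n):
    best, records = float("-inf"), 0
    for _ in range(n):
        height = random.random()      # any continuous distribution works
        if height > best:
            best, records = height, records + 1
    return records

trials = 10_000
simulated = sum(count_records(1000) for _ in range(trials)) / trials
print(harmonic, simulated)            # both come out close to 7.49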

Calculating total over timespan with arbitrary datapoints

If I am given the total number of occurrences of an event over the last hour, and I can get this data at arbitrary times (but at least once an hour), how can I work out the total number of occurrences over a 24 hour period?
Obviously, you can't. For example, if the first two observations overlap, it is impossible to determine the number of occurrences during the overlap; if there is a time gap between the first two observations, there is no way to determine what happened during the gap. You could try to set up a system of equations, but the resulting system will be underdetermined (though it could give you both a min and a max, which might be relevant).
Why not adopt a statistical approach? Let X = the number of occurrences over a 1-hour period. This is a random variable. Estimate its expected value by sampling it at randomly chosen times and multiply your estimate by 24.
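A minimal sketch of that estimator (my own illustration in Python; get_last_hour_count is a hypothetical stand-in for whatever call returns the "occurrences in the last hour" figure):

import random

def estimate_daily_total(get_last_hour_count, samples=24):
    # Each reading should be taken at a randomly chosen time of day;
    # here we just call the (hypothetical) data source repeatedly.
    readings = [get_last_hour_count() for _ in range(samples)]
    return 24 * sum(readings) / len(readings)

# Example with a fake source reporting 90-110 occurrences per hour.
print(estimate_daily_total(lambda: random.randint(90, 110)))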

Finding the constant c in the time complexity of certain algorithms

I need help finding and approximating the constant c in the complexity of insertion sort (c*n^2) and merge sort (c*n*lg n) by inspecting the results of their running times.
A bit of background: my purpose was to "implement insertion sort and merge sort (decreasing order) algorithms and measure the performance of these two algorithms. For each algorithm, and for each n = 100, 200, 300, 400, 500, 1000, 2000, 4000, measure its running time when the input is
already sorted, i.e. n, n-1, …, 3, 2,1;
reversely sorted 1, 2, 3, … n;
random permutation of 1, 2, …, n.
The running time should exclude the time for initialization."
I have done the code for both algorithms and put the measurements (microseconds) in a spreadsheet. Now, I'm not sure how to find this c due to differing values for each condition of each algorithm.
For reference, the time table (times in microseconds; AS = already sorted, RS = reversely sorted):
             InsertionSort             MergeSort
n         AS     RS    Random      AS     RS    Random
100       12     419      231     192     191      211
200       13    2559     1398    1303    1299     1263
300       20     236       94     113     113      123
400       25     436      293     536     641      556
500       32     504      246      91      81      105
1000      65    1991      995     169     246      214
2000       9    8186     4003     361     370      454
4000      17   31777    15797     774     751      952
I can provide the code if necessary.
It's hardly possible to determine the values of these constants, especially on modern processors that use caches, pipelines, and other "performance things".
Of course, you can try to find an approximation, and then you'll need Excel or any other spreadsheet.
Enter your data, create a chart, and then add a trendline. The spreadsheet calculates the values of the constants for you.
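If you prefer code to a spreadsheet, the same trendline idea can be sketched with numpy (my own illustration, using only the larger n from the reverse-sorted insertion-sort column of the question, times in microseconds):

import numpy as np

n = np.array([1000, 2000, 4000], dtype=float)
t = np.array([1991, 8186, 31777], dtype=float)   # microseconds

# Least-squares fit of t(n) = c * n^2 (no intercept).
A = (n ** 2).reshape(-1, 1)
c = np.linalg.lstsq(A, t, rcond=None)[0][0]
print(c)   # about 0.002 microseconds, i.e. roughly 2 ns per n^2 "step"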
The first thing to understand is that complexity and running time are not the same thing and may not have very much to do with each other.
Complexity is a theoretical measure that gives an idea of how an algorithm slows down on bigger inputs compared to smaller inputs, or compared to other algorithms.
The running time depends on the exact implementation, the computer it is running on, the other programs that run on the same computer, and many other things. You will also notice that the running time jumps when the input gets too big for your cache, and jumps again when it gets too big for your RAM. As you can see, for n = 200 you got some weird running times; this will not help you find the constants.
In cases where you don't have the code, you have no choice but to use the running times to approximate the complexity. Then you should use only big inputs (1000 should be the smallest input in your case). If your algorithm is deterministic, just input the worst case. Random cases can be good or bad, so you never learn much about the real complexity from them. Another problem is that complexity counts "operations", so evaluating an if-statement and incrementing a variable count the same, while in running time an if-statement needs more time than an increment.
So what you can do is plot your complexity function and the values you measured and look for a factor that holds...
E.g. this is a plot of n² scaled by 1/500 together with the points from your chart.
First some notes:
you have very small n
Algorithmic complexity starts corresponding to runtime only if n is big enough. For n = 4000 that is ~4 KB of data, which can still fit into most CPU caches, so increasing to at least n = 1,000,000 can and will change the relation between runtime and n considerably!
Runtime measurement
For random data you need an average runtime measurement, not a single one, so for any n do at least 5 measurements, each with a different dataset, and use the average time across all of them.
Now how to obtain c
If a program has complexity O(n^2), it means that for big enough n the runtime is:
t(n) = c*n^2
So take a few measurements. I choose the last 3 from your insertion sort, reverse sorted, because that should match the worst-case O(n^2) complexity if I am not mistaken:
c*n^2 = t(n)
c*1000^2 =  1991 us
c*2000^2 =  8186 us
c*4000^2 = 31777 us
Solve the equations:
c = t(n)/(n^2)
c =  1991/ 1000000 = 0.00199 us ≈ 1.99 ns
c =  8186/ 4000000 = 0.00205 us ≈ 2.05 ns
c = 31777/16000000 = 0.00199 us ≈ 1.99 ns
If everything is alright then the c for different n should be roughly the same. In your case it is around 2 ns per n^2 "unit", but as I mentioned above, with increasing n this will change due to cache usage. Also, if any dynamic container is used, then you have to include the complexity of its usage in the algorithm, which can sometimes be significant!
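The same division is easy to script; a small sketch of mine in Python (times in microseconds, taken from the question's table):

measurements = {1000: 1991, 2000: 8186, 4000: 31777}  # insertion sort, reverse sorted

for n, t in measurements.items():
    c = t / (n ** 2)
    print("n=%d: c = %.5f us (~%.2f ns)" % (n, c, c * 1000))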
Take the case of 4000 elements and divide the time by the respective complexity estimate, 4000² or 4000·lg 4000.
This is no worse than any other method.
For safety, you should check anyway that the last values lie on a relatively smooth curve, so that the value for 4000 is representative.
As others commented, this is rather poor methodology. You should also consider the standard deviation of the running times, or even better, the histogram of running times, and cover a larger range of sizes.
On the other hand, getting accurate values is not that important, as knowing the values of the constants does not help you compare the two algorithms.

random number generator test

How will you test if the random number generator is generating actual random numbers?
My approach: first build a hash table of size M, where M is a prime number. Then take each number generated by the random number generator, take it mod M,
and see whether it fills the whole table or only some part of it.
That's my approach. Can we prove it with a visualization?
Since I have very little knowledge about testing, can you suggest a thorough approach to this question? Thanks in advance.
You should be aware that you cannot guarantee the random number generator is working properly. Even with a perfectly uniform distribution on the range [1,10], there is a 10^-10 chance of getting ten 10s in a random sample of 10 numbers.
Is it likely? Of course not.
So - what can we do?
We can statistically show that the combination (10,10,....,10) is unlikely if the random number generator is indeed uniformly distributed. This concept is called hypothesis testing. With this approach we can say "with certainty level of x%, we can reject the hypothesis that the data is taken from a uniform distribution".
A common way to do it is Pearson's chi-squared test. The idea is similar to yours: you fill in a table, check the observed (generated) number of values in each cell, and compare it with the expected number in each cell under the null hypothesis (in your case, the expected count is k/M, where M is the range's size and k is the total number of values drawn).
You then do some manipulation on the data (see the Wikipedia article for more info on what this manipulation is exactly) and get a number, the test statistic. You then check how likely this number is under a chi-square distribution. If it is likely, you cannot reject the null hypothesis; if it is not, you can say with x% certainty that the data was not produced by a uniform random generator.
EDIT: example:
You have a die, and you want to check whether it is "fair" (uniformly distributed on [1,6]). Throw it 200 times (for example) and create the following table:
number:                 1     2     3     4     5     6
empirical occurrences:  37    41    30    27    32    33
expected occurrences:   33.3  33.3  33.3  33.3  33.3  33.3
Now, according to Pearson's test, the statistic is:
X = ((37-33.3)^2)/33.3 + ((41-33.3)^2)/33.3 + ... + ((33-33.3)^2)/33.3
X = (13.69 + 59.29 + 10.89 + 39.69 + 1.69 + 0.09) / 33.3
X ≈ 3.76
For a random C ~ ChiSquare(5), the probability of being higher than 3.76 is ~0.58, which is not at all improbable(1).
So we cannot reject the null hypothesis, and we can conclude that the data is plausibly uniformly distributed on [1,6].
(1) We usually reject the null hypothesis if this probability (the p-value) is smaller than 0.05, but the threshold is very case dependent.
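For reference, the whole calculation is a single call in Python's scipy (a sketch of mine, not part of the original answer):

from scipy.stats import chisquare

observed = [37, 41, 30, 27, 32, 33]   # the die counts from the example above
expected = [200 / 6] * 6              # a fair die over 200 throws

statistic, p_value = chisquare(observed, f_exp=expected)
print(statistic, p_value)             # roughly 3.76 and 0.58: no reason to reject fairness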
My naive idea:
The generator follows a distribution (at least it should). Do a reasonable number of runs, then plot the values on a graph and fit a regression curve to the points. If it matches the shape of the expected distribution, you're good. (This is also possible in 1D with projections and histograms, and it is fully automatable with the right tool, e.g. MATLAB.)
You can also use the Diehard tests, as mentioned before; that is surely better, but it involves much less intuition, at least on your side.
Let's say you want to generate a uniform distribution on the interval [0, 1].
Then one possible test is
def fraction_in_range(random_being_tested, a, b, sample_size):
    # Count how many generated samples fall strictly between a and b.
    counter = 0
    for _ in range(sample_size):
        if a < random_being_tested() < b:
            counter += 1
    return counter / sample_size
Then see if the result is close to b - a (b minus a).
Of course, you should define a function taking a, b between 0 and 1 as inputs and returning the difference between counter/sample_size and b - a. Loop through possible a, b, say multiples of 0.01 with a < b, and print out a, b whenever the difference is larger than a preset epsilon, say 0.001.
Those are the a, b for which there are too many (or too few) hits.
If you let sample_size be 5000, your random_being_tested will be called about 5000 * 5050 times in total, hopefully not too bad.
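Putting the pieces together, here is a sketch of my own of the full sweep (random.random merely stands in for the generator under test):

import random

def fraction_in_range(random_being_tested, a, b, sample_size):
    # Restated from above for completeness.
    return sum(1 for _ in range(sample_size)
               if a < random_being_tested() < b) / sample_size

def flag_suspicious_intervals(random_being_tested, sample_size=5000, epsilon=0.001):
    grid = [i / 100 for i in range(101)]            # multiples of 0.01 in [0, 1]
    for i, a in enumerate(grid):
        for b in grid[i + 1:]:
            diff = abs(fraction_in_range(random_being_tested, a, b, sample_size) - (b - a))
            if diff > epsilon:
                print(a, b)                         # deviates too much from the expected b - a

flag_suspicious_intervals(random.random)            # random.random stands in for the RNG under test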
I had the same problem.
When I finished writing my code (using an external RNG engine), I looked at the results and found that all of them failed the chi-square test whenever I had too many results.
My code generated random numbers and kept buckets counting how many fell into each result range.
I don't know why the chi-square test fails when I have a lot of results.
During my research I saw that C#'s Random.Next() fails for any range and that some numbers have better odds than others; furthermore, I saw that the RNGCryptoServiceProvider provider does not handle big numbers well.
When trying to get numbers in the range 0-1,000,000,000, the numbers in the lower range 0-300M have better odds of appearing...
As a result I'm using RNGCryptoServiceProvider, and if my range is higher than 100M I combine the number myself (RandomHigh*100M + RandomLow), so the ranges of both randoms are smaller than 100M and it works well.
Good Luck!
