I've implemented a simple autocorrelation routine against some audio samples at a sample rate of 44100 Hz with a block size of 2048 samples.
The general formula I am following looks like this:
r[k] = (a ⋆ a)[k] = ∑ a[n] • a[n + k]
and I've implemented it in a brute-force nested loop as follows:
for k = 0 to N-1 do
    for n = 0 to N-1 do
        if (n + k) < N
            then r[k] := r[k] + a[n] * a[n + k]
            else break;
    end for n;
end for k;
I look for the max magnitude in r, determine how many samples away it is, and calculate the frequency from that lag.
To help temper the tuner's results, I am using a circular buffer and returning the median each time.
The brute force calculations are a bit slow - is there a well-known, faster way to do them?
Sometimes, the tuner just isn't quite as accurate as is needed. What type of heuristics can I apply here to help refine the results?
Sometimes the OCTAVE is incorrect - is there a way to home in on the correct octave a bit more accurately?
The efficient way to do autocorrelation is with an FFT:
FFT the time domain signal
convert the complex FFT output to squared magnitude with zero phase (i.e. the power spectrum)
take inverse FFT
This works because the autocorrelation of a signal and its power spectrum are a Fourier-transform pair (the Wiener-Khinchin theorem).
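As a concrete illustration, here is a minimal NumPy sketch of that recipe (assuming a real-valued frame x; zero-padding to at least 2N-1 samples avoids circular-correlation wrap-around):

import numpy as np

def autocorr_fft(x):
    # Autocorrelation via the power spectrum (Wiener-Khinchin).
    n = len(x)
    nfft = 1 << (2 * n - 1).bit_length()  # next power of two >= 2n-1, avoids circular wrap
    spectrum = np.fft.rfft(x, nfft)       # FFT of the (zero-padded) frame
    power = spectrum * np.conj(spectrum)  # squared magnitude, zero phase
    r = np.fft.irfft(power)               # inverse FFT gives the autocorrelation
    return r[:n]                          # lags 0 .. n-1

Note that lag k of r sums over only N - k products, so the raw values taper off with increasing lag; some implementations divide by (N - k) to compensate.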
Having said that, bare bones autocorrelation is not a great way to implement (accurate) pitch detection in general, so you might want to have a rethink about your whole approach.
One simple way to improve this "brute force" autocorrelation method is to limit the range of k and only search for lags (or pitch periods) near the previous average period, say within +/- 0.5 semitones at first. If you don't find a correlation, search a slightly wider range, say within a major third, then a wider range still, but within the expected frequency range of the instrument being tuned.
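For example, a sketch of that restricted search (the previous period estimate prev_period and the widening factor are hypothetical parameters, not something from your code):

import numpy as np

def find_peak_lag(r, prev_period, widen=2 ** (0.5 / 12.0)):
    # Search for the autocorrelation peak only within +/- half a semitone
    # (a factor of about 1.03) of the previous period estimate.
    lo = max(1, int(prev_period / widen))
    hi = min(len(r), int(prev_period * widen) + 1)
    return lo + int(np.argmax(r[lo:hi]))

If the peak found this way is too weak, widen the factor (a major third is about 2 ** (4 / 12)) and try again.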
You can get higher frequency resolution by using a higher sample rate (say, upsampling the data before the autocorrelation if necessary, and with proper filtering).
You will get autocorrelation peaks at the pitch lag (period) and at multiples of that lag. You will have to eliminate those subharmonics somehow, perhaps by rejecting them as impossible for the instrument, or as an unlikely pitch jump from the previous frequency estimates.
I don't fully understand the question, but I can point out one trick that you might be able to use. You say you look for the sample that is the max magnitude. If it is useful in the rest of your calculations, you can calculate that sample number to sub-sample precision.
Let's say you have a peak of 0.9 at sample 5, with neighboring values of 0.1 at sample 4 and 0.8 at sample 6. The actual peak is probably somewhere between sample 5 and sample 6.
(0.1 * 4 + 0.9 * 5 + 0.8 * 6) / (0.1 + 0.9 + 0.8) = 5.39
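A sketch of that refinement as a function (assuming r is the autocorrelation array and k is the integer index of its peak, with valid neighbours on both sides):

def subsample_peak(r, k):
    # Weighted centroid of the peak and its two neighbours gives a
    # sub-sample estimate of the true peak position.
    y0, y1, y2 = r[k - 1], r[k], r[k + 1]
    return ((k - 1) * y0 + k * y1 + (k + 1) * y2) / (y0 + y1 + y2)

With the values above (0.1, 0.9, 0.8 at samples 4, 5, 6) this returns 5.39. A three-point parabolic fit is a common alternative.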
Related
I would like to know how many FLOPS a Fast Fourier Transform (FFT) performs.
So, if I have a 1 dimensional array of N float numbers and I would like to calculate the FFT of this set of numbers, how many FLOPS need to be performed?
I know this depends on the algorithm used, but what about the fastest one available?
I also know that the scaling of an FFT is of the order of N*log(N), but this does not answer my question.
That depends on the implementation. Fastest does not necessarily mean lowest FLOP count nor highest FLOPS. The speed is often achieved by exploiting the HW architecture rather than by lowering the FLOP count. There are too many implementations out there, so without actual code and a target architecture your question is unanswerable.
I like precomputed W matrix implementations, as I usually use FFT on single-resolution matrices many times, so there is no need to compute W more than once per resolution. That can cut down the FLOP count per recursion layer significantly.
For example, this DFFTcc has 14 FLOP per iteration, using only +, -, * operations. Assuming the 1D FFT case with N = 8 and the basic data type, and if I did not make any silly mistake:
FLOP = 8*14 + (4+4)*14 + (2+2+2+2)*14 + (1+1+1+1+1+1+1+1)*2 = 14*N*log2(N) + 2*N = 352
If you use real input/output you can lower that even further for the first/last recursion layer. But a simple FLOP count is not enough, as some operations are more complicated than others, and the FLOP count is not the only thing that affects speed.
Now to get the FLOPS just measure time [s] the FFT takes:
FLOPS = FLOP/time
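For instance, a rough timing sketch (using NumPy's FFT purely as an example transform to time, and the 14*N*log2(N) + 2*N estimate from above):

import time
import numpy as np

N = 1 << 20                                   # transform length (a power of two)
x = np.random.rand(N) + 1j * np.random.rand(N)

t0 = time.perf_counter()
np.fft.fft(x)
elapsed = time.perf_counter() - t0

flop = 14 * N * np.log2(N) + 2 * N            # FLOP estimate from above
print("estimated FLOPS:", flop / elapsed)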
As underlined by Spektre, the actual FLOPS (Floating Point OPerations per Second) depend on the particular hardware and implementation and higher FLOP (Floating Point OPeration) algorithms may correspond to lower FLOPS implementations, just because with such implementations you can more effectively exploit the hardware.
If you want to compute the number of floating point operations for a Decimation In Time radix-2 approach, then you can refer to the following figure:
Let N be the length of the sequence to be transformed. There are log2(N) stages overall and each stage contains N/2 butterflies. Let us then consider the generic butterfly:
Let us rewrite the output of the generic butterfly as
E(i + 1) = E(i) + W * O(i)
O(i + 1) = E(i) - W * O(i)
A butterfly thus involves one complex multiplication and two complex additions. On rewriting the above equations in terms of real and imaginary parts, we have
real(E(i + 1)) = real(E(i)) + (real(W) * real(O(i)) - imag(W) * imag(O(i)))
imag(E(i + 1)) = imag(E(i)) + (real(W) * imag(O(i)) + imag(W) * real(O(i)))
real(O(i + 1)) = real(O(i)) - (real(W) * real(O(i)) - imag(W) * imag(O(i)))
imag(O(i + 1)) = imag(O(i)) - (real(W) * imag(O(i)) + imag(W) * real(O(i)))
Accordingly, we have
4 multiplications
real(W) * real(O(i)),
imag(W) * imag(O(i)),
real(W) * imag(O(i)),
imag(W) * real(O(i)).
6 sums
real(W) * real(O(i)) – imag(W) * imag(O(i)) (1)
real(W) * imag(O(i)) + imag(W) * real(O(i)) (2)
real(E(i)) + eqn.1
imag(E(i)) + eqn.2
real(E(i)) – eqn.1
imag(E(i)) – eqn.2
Therefore, the operation counts for the Decimation In Time radix-2 approach are
2N * log2(N) multiplications
3N * log2(N) additions
These operation counts may change if the multiplications are differently arranged, see Complex numbers product using only three multiplications.
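To make the count concrete, here is a small sketch of a single butterfly written exactly as above (plain Python complex arithmetic, not taken from any particular library):

def butterfly(e, o, w):
    # One radix-2 DIT butterfly: 4 real multiplications and
    # 6 real additions/subtractions, matching the count above.
    tr = w.real * o.real - w.imag * o.imag    # real part of W * O(i)
    ti = w.real * o.imag + w.imag * o.real    # imaginary part of W * O(i)
    e_next = complex(e.real + tr, e.imag + ti)
    o_next = complex(e.real - tr, e.imag - ti)
    return e_next, o_next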
The same results apply to the case of Decimation in Frequency radix 2, see figure
You can estimate FLOPS performance at the FFTW benchmark page. It is slightly outdated but contains results for the most efficient FFT implementations.
It seems that a rough estimate is about 5000 MFLOPS for a 3.0 GHz Intel Xeon Core Duo.
The "fastest available" is not only very processor dependent, but likely to use a completely different algorithm my test. But I counted the flops for a bog-simple non-recursive in-place decimation-in-time radix-2 FFT taken right out of an old ACM algorithms textbook for an FFT of length 1024, and got 20480 fmuls and 30720 fadds (this was using a pre-computed twiddle factor table, thus the transcendental function computations were not included in the flop counts). But note that this code additionally used a ton of integer array index calculations, sine table lookups, and data moves that probably took a lot more of the CPUs cycles than did the FPU. Much larger FFTs would likely also incur a large amount of additional data cache misses and other memory latency penalties. It's possible in that situation to speed up the code by adding more FLOPs in exchange for reduced memory hierarchy latency penalties. So, YMMV.
I came up with this
n = 1;
curAvg = 0;
loop {
    curAvg = curAvg + (newNum - curAvg) / n;
    n++;
}
I think the highlights of this approach are:
- it avoids big numbers (and possible overflow if you summed first and divided later)
- you save one register (no need to store the sum)
The trouble might be accumulated rounding error, but I assume the round-ups and round-downs will roughly balance out, so the error should not accumulate dramatically.
Do you see any pitfalls in this solution?
Do you have any better proposal?
Your solution is essentially the "standard" optimal online solution for keeping a running average: it does not store big sums, it runs "online" (you process one number at a time without going back to earlier numbers), and it uses only a constant amount of extra memory. If you want a slightly more accurate solution numerically, at the cost of no longer being "online", and assuming your numbers are all non-negative, then sort your numbers from smallest to largest first and process them in that order, the same way you do now. That way, if you get a bunch of numbers that are really small and about equal, followed by one big number, you will be able to compute the average accurately without the small contributions being lost to rounding, as opposed to processing the large number first.
I have used this algorithm for many years.
The loop is any kind of loop; maybe it iterates over individual web sessions, or maybe it is a true loop. The point is that all you need to track is the current count (N) and the current average (avg). Each time a new value is received, apply this algorithm to update the average. This computes the exact arithmetic average, and it has the additional benefit of being resistant to overflow. If you have a gazillion large numbers to average, summing them all up may overflow before you ever get to divide by N. This algorithm avoids that pitfall.
Variables that are stored during the computation of the average:
N = 0
avg = 0
For each new value V:
    N = N + 1
    a = 1 / N
    b = 1 - a
    avg = a * V + b * avg
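A minimal sketch of the same update in code (variable names follow the description above):

def running_average(values):
    # Incremental arithmetic mean: avg_N = (1/N) * V + (1 - 1/N) * avg_(N-1)
    n = 0
    avg = 0.0
    for v in values:
        n += 1
        a = 1.0 / n
        avg = a * v + (1.0 - a) * avg
    return avg

# running_average([2.0, 4.0, 9.0]) == 5.0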
In a log-linear model, we can find the maximum entropy solution using IIS (Improved Iterative Scaling).
We update the parameters by finding parameters that make the model expectation of each feature match its empirical expectation.
However, there is an exp(sum of all features) term in the equation.
My question is: when the number of features is large (say 10000), the summation over all the features will easily blow up. How can we solve this problem numerically? To me it seems impossible, since even computing exp(50) will blow up.
Do computations in log-space, and use a logsumexp operation (borrowed from scikit-learn):
import math

# 1-d version of logsumexp:
# computes log(sum(exp(x) for x in a)) in a numerically stable way.
def logsumexp(a):
    amax = max(a)
    total = 0.0
    for x in a:
        total += math.exp(x - amax)
    return math.log(total) + amax
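Continuing from the function above, a hypothetical example with scores too large to exponentiate directly:

scores = [800.0, 799.0, 10.0]                    # exp(800) overflows a double
log_z = logsumexp(scores)                        # finite: about 800.31
probs = [math.exp(s - log_z) for s in scores]    # normalized, no overflow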
This summation can be done once, before the start of the main loop, because the feature values don't change while you're optimizing.
Side remark: IIS is quite old-fashioned. For some ten years now, almost everyone has been using L-BFGS-B, OWL-QN or (A)SGD to fit log-linear models.
I need to compute the quantiles of a large set of data.
Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort them and compute the quantile:
List<double> allData = new List<double>();
// This is only an example; the portions of data are not really rows of some matrix
foreach(var row in matrix)
{
allData.AddRange(row);
}
allData.Sort();
double p = 0.75 * allData.Count;
int idQ3 = (int)Math.Ceiling(p) - 1;
double Q3 = allData[idQ3];
I would like to find a way of obtaining the quantile without storing the data in an intermediate variable. The best solution would be to compute some intermediate parameters for the first row and then adjust them step by step for the next rows.
Note:
These datasets are really big (ca 5000 elements in each row)
The Q3 can be estimated, it doesn't have to be an exact value.
I call the portions of data "rows", but they can have different lengths! Usually the length does not vary much (+/- a few hundred samples), but it varies!
This question is similar to “On-line” (iterator) algorithms for estimating statistical median, mode, skewness, kurtosis, but I need to count quantiles.
Also, there are a few articles on this topic, e.g.:
An Efficient Algorithm for the Approximate Median Selection Problem
Incremental quantile estimation for massive tracking
Before trying to implement these approaches, I wondered whether there are any other, quicker ways of computing the 0.25/0.75 quantiles.
I second the idea of using buckets. Don't limit yourself to 100 buckets - might as well use 1 million. The tricky part is to pick your bucket ranges so that everything doesn't end up in a single bucket. Probably the best way to estimate your bucket ranges is to take a reasonable random sample of your data, compute the 10% and 90% quantiles using the simple sort algorithm, then generate equal-sized buckets to fill that range. It isn't perfect, but if your data isn't from a super-weird distribution, it should work.
If you can't do random samples, you're in more trouble. You can pick an initial bucketing guess based on your expected data distribution, then while working through your data if any bucket (typically the first or last bucket) gets overfull, start over again with a new bucket range.
There is a more recent and much simpler algorithm for this that provides very good estimates of the extreme quantiles.
The basic idea is that smaller bins are used at the extremes in a way that both bounds the size of the data structure and guarantees higher accuracy for small or large q. The algorithm is available in several languages and many packages. The MergingDigest version requires no dynamic allocation ... once the MergingDigest is instantiated, no further heap allocation is required.
See https://github.com/tdunning/t-digest
Only retrieve the data you really need -- i.e., whatever value(s) is/are being used as the key for sorting, not everything else associated with it.
You can probably use Tony Hoare's Select algorithm to find your quantile more quickly than sorting all the data.
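A sketch of that selection approach (a randomized quickselect with a Hoare-style partition; expected linear time, works on a copy of the data):

import random

def quickselect(data, k):
    # Return the k-th smallest element (0-based) without fully sorting.
    a = list(data)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                      # partition around the pivot value
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        if k <= j:
            hi = j
        elif k >= i:
            lo = i
        else:
            return a[k]                    # k lies among elements equal to the pivot
    return a[lo]

Used with the question's indexing, Q3 would be quickselect(data, ceil(0.75 * n) - 1).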
If your data has a Gaussian distribution, you can estimate the quantiles from the standard deviation. I assume your data isn't Gaussian distributed or you'd just be using the SD anyway.
If you can pass through your data twice, I'd do the following:
First pass: compute the max, min, SD and mean.
Second pass: divide the range [min, max] into some number of buckets (e.g. 100), or do the same for (mean - 2*SD, mean + 2*SD) with extra buckets for the outliers; then run through the data again, tossing numbers into these buckets.
Count buckets until you are at 25% and 75% of the data. If you want to get extra fancy, you can interpolate between bucket values, i.e. if you need 10% of a bucket to hit your 25th percentile, assume the value is 10% of the way from the bucket's lower bound to its upper bound.
This should give you a pretty good linear-time algorithm that works okay for most sets of not-entirely-perverse data.
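A rough sketch of that two-pass plan (using just [min, max] for the bucket range, with the interpolation step as described; treat it as an approximation, not an exact quantile):

def two_pass_quantiles(data, n_buckets=100, qs=(0.25, 0.75)):
    # First pass: find the range.
    lo, hi = min(data), max(data)
    if hi == lo:
        return [lo for _ in qs]            # degenerate case: all values equal
    width = (hi - lo) / n_buckets
    # Second pass: toss values into equal-width buckets.
    counts = [0] * n_buckets
    for x in data:
        i = min(int((x - lo) / width), n_buckets - 1)
        counts[i] += 1
    # Walk the buckets, interpolating within the bucket that crosses each target.
    results = []
    total = len(data)
    for q in qs:
        target = q * total
        cum = 0
        for i, c in enumerate(counts):
            if cum + c >= target:
                frac = (target - cum) / c if c else 0.5
                results.append(lo + (i + frac) * width)
                break
            cum += c
    return results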
Inspired by this answer, I created a method that estimates the quantiles quite well. It is an approximation close enough for my purposes.
The idea is the following: the 0.75 quantile is in fact the median of all values that lie above the global median, and correspondingly the 0.25 quantile is the median of all values below the global median.
So if we can approximate the median, we can approximate the quantiles in the same way.
double median = 0;
double q1 = 0;
double q3 = 0;
double eta = 0.005;

// First pass: approximate the global median.
foreach (var value in listOfValues) // or stream, or any other large set of data...
{
    median += eta * Math.Sign(value - median);
}

// Second pass. We know the median, so we can estimate the quantiles.
foreach (var value in listOfValues)
{
    if (value < median)
        q1 += eta * Math.Sign(value - q1);
    else
        q3 += eta * Math.Sign(value - q3);
}
Remarks:
If the distribution of your data is strange, you will need a bigger eta to fit the strange data, but the accuracy will be worse.
If the distribution is strange but you know the total size of your collection (i.e. N), you can adjust the eta parameter this way: at the beginning set eta to some fairly large value (e.g. 0.2); as the loop progresses, lower the value of eta so that when you reach almost the end of the collection eta is almost 0 (for example, compute it inside the loop as eta = 0.2 - 0.2*(i/N)).
q-digest is an approximate online algorithm that lets you compute quantiles: http://www.cs.virginia.edu/~son/cs851/papers/ucsb.sensys04.pdf
Here is an implementation:
https://github.com/airlift/airlift/blob/master/stats/src/main/java/io/airlift/stats/QuantileDigest.java
I'm trying to calculate the median of a set of values, but I don't want to store all the values as that could blow memory requirements. Is there a way of calculating or approximating the median without storing and sorting all the individual values?
Ideally I would like to write my code a bit like the following
var medianCalculator = new MedianCalculator();
foreach (var value in SourceData)
{
medianCalculator.Add(value);
}
Console.WriteLine("The median is: {0}", medianCalculator.Median);
All I need is the actual MedianCalculator code!
Update: Some people have asked if the values I'm trying to calculate the median for have known properties. The answer is yes. One value is in 0.5 increments from about -25 to -0.5. The other is also in 0.5 increments from -120 to -60. I guess this means I can use some form of histogram for each value.
Thanks
Nick
If the values are discrete and the number of distinct values isn't too high, you could just accumulate the number of times each value occurs in a histogram, then find the median from the histogram counts (just add up counts from the top and bottom of the histogram until you reach the middle). Or if they're continuous values, you could distribute them into bins - that wouldn't tell you the exact median but it would give you a range, and if you need to know more precisely you could iterate over the list again, examining only the elements in the central bin.
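Given the 0.5-increment value ranges mentioned in the update, the counting-histogram version can even be exact. A Python sketch of the kind of MedianCalculator the question asks for (hypothetical code, exact only because the values are discrete):

from collections import Counter

class MedianCalculator:
    # Exact median for discrete values, storing only a count per distinct value.
    def __init__(self):
        self.counts = Counter()
        self.n = 0

    def add(self, value):
        self.counts[value] += 1
        self.n += 1

    @property
    def median(self):
        # Walk the histogram in value order until we reach the middle rank
        # (returns the lower middle value for an even count).
        half = (self.n + 1) // 2
        seen = 0
        for v in sorted(self.counts):
            seen += self.counts[v]
            if seen >= half:
                return v
        return None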
There is the 'remedian' statistic. It works by first setting up k arrays, each of length b. Data values are fed into the first array and, when this is full, its median is calculated and stored in the next free position of the second array, after which the first array is re-used. When the second array is full, the median of its values is stored in the next free position of the third array, and so on. You get the idea :)
It's simple and pretty robust. The reference is here...
http://web.ipac.caltech.edu/staff/fmasci/home/astro_refs/Remedian.pdf
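A rough sketch of the remedian (simplified: the final estimate here is just the median of the highest non-empty buffer, whereas the paper uses a weighted median when buffers are partially full):

import statistics

class Remedian:
    # k buffers of length b; constant memory regardless of stream length.
    def __init__(self, b=11, k=4):
        self.b = b
        self.buffers = [[] for _ in range(k)]

    def add(self, value, level=0):
        buf = self.buffers[level]
        buf.append(value)
        if len(buf) == self.b:
            m = statistics.median(buf)
            buf.clear()
            if level + 1 < len(self.buffers):
                self.add(m, level + 1)     # push this buffer's median one level up
            else:
                buf.append(m)              # top buffer overflowed: keep only its median

    def estimate(self):
        for buf in reversed(self.buffers):  # highest non-empty buffer
            if buf:
                return statistics.median(buf)
        return None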
Hope this helps
Michael
I use these incremental/recursive mean and median estimators, which both use constant storage:
mean += eta * (sample - mean)
median += eta * sgn(sample - median)
where eta is a small learning rate parameter (e.g. 0.001), and sgn() is the signum function which returns one of {-1, 0, 1}. (Use a constant eta if the data is non-stationary and you want to track changes over time; otherwise, for stationary sources you can use something like eta=1/n for the mean estimator, where n is the number of samples seen so far... unfortunately, this does not appear to work for the median estimator.)
This type of incremental mean estimator seems to be used all over the place, e.g. in unsupervised neural network learning rules, but the median version seems much less common, despite its benefits (robustness to outliers). It seems that the median version could be used as a replacement for the mean estimator in many applications.
Also, I modified the incremental median estimator to estimate arbitrary quantiles. In general, a quantile function tells you the value that divides the data into two fractions: p and 1-p. The following estimates this value incrementally:
quantile += eta * (sgn(sample - quantile) + 2.0 * p - 1.0)
The value p should be within [0,1]. This essentially shifts the sgn() function's symmetrical output {-1,0,1} to lean toward one side, partitioning the data samples into two unequally-sized bins (fractions p and 1-p of the data are less than/greater than the quantile estimate, respectively). Note that for p=0.5, this reduces to the median estimator.
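A sketch of that estimator as a function (eta and the zero starting value are arbitrary; in practice you would seed the estimate with the first sample or a rough guess):

def incremental_quantile(stream, p, eta=0.001):
    # Constant-memory quantile estimate; p = 0.5 reduces to the median update.
    def sgn(x):
        return (x > 0) - (x < 0)
    q = 0.0
    for sample in stream:
        q += eta * (sgn(sample - q) + 2.0 * p - 1.0)
    return q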
I would love to see an incremental mode estimator of a similar form...
(Note: I also posted this to a similar topic here: "On-line" (iterator) algorithms for estimating statistical median, mode, skewness, kurtosis?)
Here is a crazy approach that you might try. This is a classical problem in streaming algorithms. The rules are
You have limited memory, say O(log n), where n is the number of items you want to process.
You can look at each item once and decide then and there what to do with it; if you store it, it costs memory, and if you throw it away it is gone forever.
The idea for finding a median is simple. Sample O(1/a^2 * log(1/p)) * log(n) elements from the list at random; you can do this via reservoir sampling (see a previous question). Now simply return the median of your sampled elements, using a classical method.
The guarantee is that the rank of the item returned will be (1 +/- a)/2 of the way through the sorted list, with probability at least 1-p. So there is a probability p of failing; you can reduce it by sampling more elements. The algorithm won't return the exact median, nor guarantee that the value of the item returned is anywhere close to the median value; it only guarantees that, when you sort the list, the returned item will be close to the middle.
This algorithm uses O(log n) additional space and runs in linear time.
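A sketch of that recipe (reservoir sampling plus a classical median of the sample; the fixed sample size k stands in for the O(...) bound above):

import random
import statistics

def sampled_median(stream, k=1000):
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)           # fill the reservoir first
        else:
            j = random.randint(0, i)      # keep x with probability k / (i + 1)
            if j < k:
                reservoir[j] = x
    return statistics.median(reservoir)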
This is tricky to get right in general, especially to handle degenerate series that are already sorted, or have a bunch of values at the "start" of the list but the end of the list has values in a different range.
The basic idea of making a histogram is most promising. This lets you accumulate distribution information and answer queries (like median) from it. The median will be approximate since you obviously don't store all values. The storage space is fixed so it will work with whatever length sequence you have.
But you can't just build a histogram from, say, the first 100 values and use that histogram forever; the changing data may make that histogram invalid. So you need a dynamic histogram that can change its range and bins on the fly.
Make a structure which has N bins. You'll store the X value of each slot transition (N+1 values total) as well as the population of the bin.
Stream in your data. Record the first N+1 values. If the stream ends before this, great, you have all the values loaded and you can find the exact median and return it. Else use the values to define your first histogram. Just sort the values and use those as bin definitions, each bin having a population of 1. It's OK to have dupes (0 width bins).
Now stream in new values. For each one, binary search to find the bin it belongs to.
In the common case, you just increment the population of that bin and continue.
If your sample is beyond the histogram's edges (highest or lowest), just extend the end bin's range to include it.
When your stream is done, you find the median sample value by finding the bin which has equal population on both sides of it, and linearly interpolating the remaining bin-width.
But that's not enough: you still need to ADAPT the histogram to the data as it's being streamed in. When a bin gets over-full, you lose information about that bin's sub-distribution.
You can fix this by adapting based on some heuristic. The easiest and most robust one: if a bin reaches a certain threshold population (something like 10*v/N, where v is the number of values seen so far in the stream and N is the number of bins), SPLIT that overfull bin. Add a new boundary at the midpoint of the bin and give each side half of the original bin's population. But now you have too many bins, so you need to DELETE a bin. A good heuristic for that is to find the bin with the smallest product of population and width, delete it, and merge it with its left or right neighbor (whichever neighbor itself has the smaller product of width and population). Done!
Note that merging or splitting bins loses information, but that's unavoidable: you only have fixed storage.
This algorithm is nice in that it will deal with all types of input streams and give good results. If you have the luxury of choosing sample order, a random sample is best, since that minimizes splits and merges.
The algorithm also allows you to query any percentile, not just median, since you have a complete distribution estimate.
I use this method in my own code in many places, mostly for debugging logs, where some stats that you're recording have an unknown distribution. With this algorithm you don't need to guess ahead of time.
The downside is that the unequal bin widths mean you have to do a binary search for each sample, so the overall algorithm is O(N log N).
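A rough sketch of this adaptive histogram (the seeding, the 10*v/N split threshold and the smallest population*width merge rule follow the description above; the neighbour choice and a few edge cases are simplified):

import bisect

class AdaptiveHistogram:
    # Fixed-size histogram: edges holds n_bins + 1 boundaries and counts[i] is
    # the population of [edges[i], edges[i + 1]].
    def __init__(self, n_bins=64):
        self.n_bins = n_bins
        self.edges = None
        self.counts = None
        self.seed = []
        self.total = 0

    def add(self, x):
        self.total += 1
        if self.edges is None:
            self.seed.append(x)
            if len(self.seed) == self.n_bins + 1:
                self.edges = sorted(self.seed)   # sorted seed values become the boundaries
                self.counts = [1] * self.n_bins  # roughly one value per bin
            return
        if x < self.edges[0]:                    # extend the end bins for out-of-range samples
            self.edges[0] = x
        elif x > self.edges[-1]:
            self.edges[-1] = x
        i = bisect.bisect_right(self.edges, x) - 1
        i = min(max(i, 0), self.n_bins - 1)
        self.counts[i] += 1
        if self.counts[i] > 10.0 * self.total / self.n_bins:
            self._split(i)

    def _split(self, i):
        # Split the over-full bin at its midpoint ...
        mid = 0.5 * (self.edges[i] + self.edges[i + 1])
        half = self.counts[i] / 2.0
        self.edges.insert(i + 1, mid)
        self.counts[i] = half
        self.counts.insert(i + 1, half)
        # ... then merge away the bin with the smallest population * width.
        j = min(range(len(self.counts)),
                key=lambda b: self.counts[b] * (self.edges[b + 1] - self.edges[b]))
        k = j - 1 if j > 0 else j + 1            # simplified neighbour choice
        lo, hi = min(j, k), max(j, k)
        self.counts[lo] += self.counts[hi]
        del self.counts[hi]
        del self.edges[hi]                       # drop the boundary the two bins shared

    def quantile(self, p=0.5):
        if self.edges is None:                   # still seeding: answer exactly
            s = sorted(self.seed)
            return s[int(p * (len(s) - 1))] if s else None
        target = p * self.total
        cum = 0.0
        for i, c in enumerate(self.counts):
            if cum + c >= target:
                frac = (target - cum) / c if c else 0.5
                return self.edges[i] + frac * (self.edges[i + 1] - self.edges[i])
            cum += c
        return self.edges[-1]

Usage would be h = AdaptiveHistogram(); h.add(x) per sample; then h.quantile(0.5) for the median, or any other percentile, as noted above.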
David's suggestion seems like the most sensible approach for approximating the median.
A running mean for the same problem is much easier to calculate:
M_n = M_(n-1) + ((V_n - M_(n-1)) / n)
where M_n is the mean of the first n values, M_(n-1) is the previous mean, and V_n is the new value.
In other words, the new mean is the existing mean plus the difference between the new value and the mean, divided by the number of values.
In code this would look something like:
new_mean = prev_mean + ((value - prev_mean) / count)
though obviously you may want to consider language-specific stuff like floating-point rounding errors etc.
I don't think it is possible to do without having the list in memory. You can obviously approximate with
average if you know that the data is symmetrically distributed
or calculate a proper median of a small subset of data (that fits in memory) - if you know that your data has the same distribution across the sample (e.g. that the first item has the same distribution as the last one)
Find the Min and Max of the list of N items through a linear search and name them LowValue and HighValue respectively.
Let MedianIndex = (N+1)/2
1st Order Binary Search:
Repeat the following 4 steps while LowValue < HighValue:
Get MedianValue approximately = ( HighValue + LowValue ) / 2
Get NumberOfItemsWhichAreLessThanorEqualToMedianValue = K
If K = MedianIndex, then return MedianValue.
If K > MedianIndex, then HighValue = MedianValue, else LowValue = MedianValue.
It will be faster, without consuming extra memory.
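A sketch of the 1st-order search (it converges on the median value by bisecting the value range and re-counting on each pass; the tolerance and iteration cap are arbitrary safeguards):

def median_by_value_bisection(data, tol=1e-9, max_iter=100):
    low, high = min(data), max(data)
    median_rank = (len(data) + 1) // 2            # MedianIndex in the description above
    for _ in range(max_iter):
        mid = (low + high) / 2.0
        k = sum(1 for x in data if x <= mid)      # one pass over the data, no extra storage
        if k >= median_rank:
            high = mid                            # enough items at or below mid
        else:
            low = mid
        if high - low <= tol:
            break
    return high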
2nd Order Binary Search:
LowIndex=1
HighIndex=N
Repeat the following 5 steps while (LowIndex < HighIndex):
Get the approximate DistributionPerUnit = (HighValue - LowValue) / (HighIndex - LowIndex)
Get Approximate MedianValue = LowValue + (MedianIndex-LowIndex) * DistributionPerUnit
Get NumberOfItemsWhichAreLessThanorEqualToMedianValue = K
If (K = MedianIndex), return MedianValue.
If (K > MedianIndex), then HighIndex = K and HighValue = MedianValue, else LowIndex = K and LowValue = MedianValue.
It will be faster than the 1st-order search, without consuming extra memory.
We can also think of fitting HighValue, LowValue and MedianValue with HighIndex, LowIndex and MedianIndex to a parabola, to get a 3rd-order binary search which will be faster than the 2nd-order one, again without consuming extra memory, and so on...
Usually if the input is within a certain range, say 1 to 1 million, it's easy to create an array of counts: read the code for "quantile" and "ibucket" here: http://code.google.com/p/ea-utils/source/browse/trunk/clipper/sam-stats.cpp
This solution can be generalized as an approximation by coercing the input into an integer within some range using a function that you then reverse on the way out: IE: foo.push((int) input/1000000) and quantile(foo)*1000000.
If your input is an arbitrary double precision number, then you've got to autoscale your histogram as values come in that are out of range (see above).
Or you can use the median-triplets method described in this paper: http://web.cs.wpi.edu/~hofri/medsel.pdf
I picked up the idea of iterative quantile calculation. It is important to have a good value for the starting point and for eta; these may come from the mean and sigma. So I programmed this:
Function QuantileIterative(Var x : Array of Double; n : Integer; p, mean, sigma : Double) : Double;
Var
  eta, quantile, q1, dq : Double;
  i : Integer;
Begin
  quantile := mean + 1.25 * sigma * (p - 0.5);
  q1 := quantile;
  eta := 0.2 * sigma / xy(1 + n, 0.75); // should not be too large! sets accuracy
  For i := 1 to n Do
    quantile := quantile + eta * (signum_smooth(x[i] - quantile, eta) + 2*p - 1);
  dq := abs(q1 - quantile);
  If dq > eta then
  Begin
    If dq < 3*eta then
      eta := eta / 4;
    For i := 1 to n Do
      quantile := quantile + eta * (signum_smooth(x[i] - quantile, eta) + 2*p - 1);
  end;
  QuantileIterative := quantile
end;
As the median for two elements would be the mean, I used a smoothed signum function, and xy() is x^y. Are there ideas to make it better? Of course if we have some more a-priori knowledge we can add code using min and max of the array, skew, etc. For big data you would not use an array perhaps, but for testing it is easier.
For a homogeneously distributed, randomly ordered, and big enough list, this pseudocode can work:
# initialize min, max and the running mid estimate with the first data point
minDataPoint = maxDataPoint = midDataPoint = firstDataPoint

for dataPoint in stream:
    # find min on the fly
    if minDataPoint > dataPoint:
        minDataPoint = dataPoint
    # find max on the fly
    if maxDataPoint < dataPoint:
        maxDataPoint = dataPoint
    # estimate the median based on the current data
    estimate_mid = (maxDataPoint + minDataPoint) / 2
    # if the new dataPoint is closer to the estimated mid, store it
    if abs(midDataPoint - estimate_mid) > abs(dataPoint - estimate_mid):
        midDataPoint = dataPoint
Inspired by #lakshmanaraj