equal power crossfade in Audio Unit? - macos

This is actually more of a theoretical question, but here it goes:
I'm developing an effect audio unit and it needs an equal power crossfade between dry and wet signals.
But I'm confused about the right way to do the mapping function from the linear fader to the scaling factor (gain) for the signal amplitudes of dry and wet streams.
Basically, I've seen it done with cos / sin functions or square roots... essentially approximating logarithmic curves. But if our perception of amplitude is logarithmic to start with, shouldn't these curves mapping the fader position to an amplitude actually be exponential?
This is what I mean:
Assumptions:
signal[i] means the ith sample in a signal.
each sample is a float ranging [-1, 1] for amplitudes between [0,1].
our GUI control is an NSSlider ranging from [0,1], so it is in principle linear.
fader is a variable with the value of the NSSlider.
First Observation:
We perceive amplitude in a logarithmic way. So if we have a linear fader and merely adjust a signal's amplitude by doing: signal[i] * fader what we are perceiving (hearing, regardless of the math) is something along the lines of:
This is the so-called crappy fader effect: we go from silence to a drastic volume increase across the leftmost segment of the slider, and past the middle the volume doesn't seem to get that much louder.
So to do the fader "right", we instead either express it in a dB scale and then, as far as the signal is concerned, do: signal[i] * 10^(fader/20) or, if we were to keep our fader units in [0,1], we can do: signal[i] * (.001*10^(3*fader))
Either way, our new mapping from the NSSlider to the fader variable which we'll use for multiplying in our code, looks like this now:
Which is what we actually want, because since we perceive amplitude logarithmically, we are essentially mapping from linear (NSSlider range 0-1) to exponential and feeding this exponential output to our logarithmic perception. And it turns out that log(10^x) = x, so we end up perceiving the amplitude change in a linear (aka correct) way.
Great.
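For illustration, here is a quick Python sketch (mine, just to show the idea) confirming that the dB form and the 0.001*10^(3*fader) form describe the same exponential mapping; the 0.001 factor corresponds to a -60 dB floor:
import numpy as np

fader = np.linspace(0.0, 1.0, 11)                  # normalized NSSlider positions

gain_db_form   = 10 ** ((60 * fader - 60) / 20)    # -60 dB .. 0 dB, written in dB terms
gain_unit_form = 0.001 * 10 ** (3 * fader)         # the [0,1] form used above

print(np.allclose(gain_db_form, gain_unit_form))   # True: both give the same exponential curve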
Now, my thought is that an equal-power crossfade between two signals (in this case a dry / wet horizontal NSSlider to mix together the input to the AU and the processed output from it) is essentially the same only that with one slider acting on both hypothetical signals dry[i] and wet[i].
So if my slider ranges from 0 to 100 (dry is full-left and wet is full-right), I'd end up with code along the lines of:
Float32 outputSample, wetSample, drySample = <assume proper initialization>
Float32 mixLevel = .01 * GetParameter(kParameterTypeMixLevel);
Float32 wetPowerLevel = .001 * pow(10, (mixLevel*3));
Float32 dryPowerLevel = .001 * pow(10, ((-3*mixLevel)+1));
outputSample = (wetSample * wetPowerLevel) + (drySample * dryPowerLevel);
The graph of which would be:
And same as before, because we perceive amplitude logarithmically, this exponential mapping should actually make it where we hear the crossfade as linear.
However, I've seen implementations of the crossfade using approximations to log curves. Meaning, instead:
But wouldn't these curves actually emphasize our logarithmic perception of amplitude?

The "equal power" crossfade you're thinking of has to do with keeping the total output power of your mix constant as you fade from wet to dry. Keeping total power constant serves as a reasonable approximation to keeping total perceived loudness constant (which in reality can be fairly complicated).
If you are crossfading between two uncorrelated signals of equal power, you can maintain a constant output power during the crossfade by using any two functions whose squared values sum to 1. A common example of this is the set of functions
g1(k) = ( 0.5 + 0.5*cos(pi*k) )^.5
g2(k) = ( 0.5 - 0.5*cos(pi*k) )^.5,
where 0 <= k <= 1 (note that g1(k)^2 + g2(k)^2 = 1 is satisfied, as mentioned). Here's a proof that this results in a constant power crossfade for uncorrelated signals:
Say we have two signals x1(t) and x2(t) with equal powers E[ x1(t)^2 ] = E[ x2(t)^2 ] = Px, which are also uncorrelated ( E[ x1(t)*x2(t) ] = 0 ). Note that any set of gain functions satisfying the previous condition will have that g2(k) = (1 - g1(k)^2)^.5. Now, forming the sum y(t) = g1(k)*x1(t) + g2(k)*x2(t), we have that:
E[ y(t)^2 ] = E[ (g1(k) * x1(t))^2 + 2*g1(k)*(1 - g1(k)^2)^.5 * x1(t) * x2(t) + (1 - g1(k)^2) * x2(t)^2 ]
= g1(k)^2 * E[ x1(t)^2 ] + 2*g1(k)*(1 - g1(k)^2)^.5 * E[ x1(t)*x2(t) ] + (1 - g1(k)^2) * E[ x2(t)^2 ]
= g1(k)^2 * Px + 0 + (1 - g1(k)^2) * Px = Px,
where we have used that g1(k) and g2(k) are deterministic and can thus be pulled outside the expectation operator E[ ], and that E[ x1(t)*x2(t) ] = 0 by definition because x1(t) and x2(t) are assumed to be uncorrelated. This means that no matter where we are in the crossfade (whatever k we choose) our output will still have the same power, Px, and thus hopefully equal perceived loudness.
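As a quick numerical sanity check of this (not part of the original answer, just a sketch), you can crossfade two uncorrelated unit-power noise signals with these gain functions and watch the output power stay put:
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.standard_normal(200_000)          # two uncorrelated, equal-power signals
x2 = rng.standard_normal(200_000)

for k in (0.0, 0.25, 0.5, 0.75, 1.0):
    g1 = np.sqrt(0.5 + 0.5 * np.cos(np.pi * k))
    g2 = np.sqrt(0.5 - 0.5 * np.cos(np.pi * k))
    y = g1 * x1 + g2 * x2
    print(k, np.mean(y ** 2))              # stays close to 1 (the common power Px) for every k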
Note that for completely correlated signals, you can achieve constant output power by doing a "linear" fade - using any two functions that sum to one ( g1(k) + g2(k) = 1 ). When mixing signals that are somewhat correlated, gain functions between those two would theoretically be appropriate.
What you're thinking of when you say
And same as before, because we perceive amplitude logarithmically,
this exponential mapping should actually make it where we hear the
crossfade as linear.
is that one signal should perceptually decrease in loudness as a linear function of slider position (k), while the other signal should perceptually increase in loudness as a linear function of slider position, when applying your derived crossfade. While your derivation of that seems pretty spot on, unfortunately that may not be the best way to blend your dry and wet signals in terms of consistency - often, maintaining equal output loudness, regardless of slider position, is the better thing to shoot for. In any case, it might be worth trying a couple of different functions to see what is most usable and consistent.

Related

How to calculate the length of cable on a winch given the rotations of the drum

I have a cable winch system and I would like to know how much cable is left given the number of rotations that have occurred, and vice versa. This system will run on a low-cost microcontroller with low computational resources and should be able to update quickly; long for/while loop iterations are not ideal.
The inputs are cable diameter, inner drum diameter, inner drum width, and drum rotations. The output should be the length of the cable on the drum.
At first, I was calculating the maximum number of wraps of cable per layer based on cable diameter and inner drum width; I could then use this to calculate the length of cable per layer. The issue comes when I calculate the total length, as I have to loop through each layer, a costly operation (there could be hundreds of layers).
My next approach was to precalculate a table with each layer, then perform a 3rd- to 5th-degree polynomial regression to get an easy-to-calculate formula.
This appears to work for the most part; however, there are slight inaccuracies at the low and high end (0 rotations could be + or - a few units of cable length). The real issue comes when I try to reverse the function to get the current rotations of the drum given the length. So far, my reversed formula does not seem to match the forward formula (I am swapping X and Y before calculating the polynomial).
I have looked high and low and cannot seem to find any formulas for cable length to rotations that do not use recursion or loops. I can't figure out how to reverse my polynomial function to get the reverse value without losing precision. If anyone happens to have an insight/ideas or can help guide me in the right direction that would be most helpful. Please see my attempts below.
// Units are not important
CableLength = 15000
CableDiameter = 5
DrumWidth = 50
DrumDiameter = 5
CurrentRotations = 0
CurrentLength = 0
CurrentLayer = 0
PolyRotations = Array
PolyLengths = Array
PolyLayers = Array
WrapsPerLayer = DrumWidth / CableDiameter
While CurrentLength < CableLength // Calculate layer length for each layer up to cable length
CableStackHeight = CableDiameter * CurrentLayer
DrumDiameterAtLayer = DrumDiameter + (CableStackHeight * 2) // Assumes cables stack vertically
WrapDiameter = DrumDiameterAtLayer + CableDiameter // Center point of cable
WrapLength = WrapDiameter * PI
LayerLength = WrapLength * WrapsPerLayer
CurrentRotations += WrapsPerLayer // 1 Rotation per wrap
CurrentLength += LayerLength
CurrentLayer++
PolyRotations.Push(CurrentRotations)
PolyLengths.Push(CurrentLength)
PolyLayers.Push(CurrentLayer)
End
// Using 5 degree polynomials, any lower = very low precision
PolyLengthToRotation = CreatePolynomial(PolyLengths, PolyRotations, 5) // 5 Degrees
PolyRotationToLength = CreatePolynomial(PolyRotations, PolyLengths, 5) // 5 Degrees
// 40 Rotations should equal about 3141.593 units
RealRotation = 40
RealLength = 3141.593
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 3141.593 // Good
// CalculatedRotations = 41.069 // No good
// CalculatedRotations != RealRotation // These should equal
// 0 Rotations should equal 0 length
RealRotation = 0
RealLength = 0
CalculatedLength = EvaluatePolynomial(RealRotation,PolyRotationToLength)
CalculatedRotations = EvaluatePolynomial(RealLength,PolyLengthToRotation)
// CalculatedLength = 1.172421e-9 // Very close
// CalculatedRotations = 1.947, // No good
// CalculatedRotations != RealRotation // These should equal
Side note: I have a "spool factor" parameter to calibrate for the actual cable spooling efficiency that is not shown here. (cable is not guaranteed to lay mathematically perfect)
@Bathsheba may have meant cable, but a table is a valid option (also, experimental numbers are probably more interesting in the real world).
A bit slow, but you could always do it manually. There are only 40 rotations (though optionally, for better experimental results, repeat 3 times and take the average...). Reel it completely in. Then do a rotation (or, depending on the diameter of your drum, half a rotation). Measure and mark how far it spooled out (tape), and record it. Repeat for the next 39 rotations. You now have a lookup table in which you can find the length in O(log N) via binary search (by sorting the data) and a bit of interpolation (i.e. 1.5 rotations is about halfway between 1 and 2 rotations).
You can also use this to derive your own experimental data. Do the same thing, but with a cable half as thin (perhaps proportional to the ratio of the inner diameter and the cable radius?). What effect does it have on the numbers? How about twice or half the diameter? Math says circumference is linear (2πr), so half the radius, half the amount per rotation. It might be easier to adjust the table data.
The gist is that it may be easier for you to have a real-world reference for your numbers rather than relying purely on an abstract mathematical model (not to say the model would be wrong, but cables don't always wind up perfectly; who knows, perhaps you'll find a quirk about your winch that would have led to errors in a pure mathematical approach). Who knows, you might be able to derive the formula yourself :) with a fudge factor for the real world, even.
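If it helps, here is a rough sketch of the lookup-table idea described above (the table values below are made-up placeholders, not measurements):
from bisect import bisect_right

# rotations[i] and lengths[i] come from the per-layer precalculation or from
# the manual measurement described above; both must be increasing.
rotations = [0.0, 10.0, 20.0, 30.0, 40.0]
lengths   = [0.0, 170.0, 360.0, 570.0, 800.0]   # made-up example numbers

def length_from_rotations(r):
    """Binary search the table, then interpolate linearly between entries."""
    i = bisect_right(rotations, r)
    if i <= 0:              return lengths[0]
    if i >= len(rotations): return lengths[-1]
    t = (r - rotations[i-1]) / (rotations[i] - rotations[i-1])
    return lengths[i-1] + t * (lengths[i] - lengths[i-1])

def rotations_from_length(s):
    """Same idea in reverse, so the forward and inverse lookups stay consistent."""
    i = bisect_right(lengths, s)
    if i <= 0:              return rotations[0]
    if i >= len(lengths):   return rotations[-1]
    t = (s - lengths[i-1]) / (lengths[i] - lengths[i-1])
    return rotations[i-1] + t * (rotations[i] - rotations[i-1])

print(length_from_rotations(15))   # halfway between the 10- and 20-rotation entries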

Understanding Support Vector Regression (SVR) [closed]

I'm working with SVR, using this resource. Everything is super clear up to the epsilon-insensitive loss function (from the figure). The prediction comes with a tube, to cover most training samples and to generalize bounds, using support vectors.
Then we have this explanation: "This can be described by introducing (non-negative) slack variables, to measure the deviation of training samples outside the ε-insensitive zone." I understand this error outside the tube, but I don't know how we can use it in the optimization. Could somebody explain this?
In my local source, I'm trying to achieve a very simple optimization solution, without libraries. This is what I have for the loss function.
import numpy as np

# Kernel func, linear by default
def hypothesis(x, weight, k=None):
    k = k if k else lambda z: z
    k_x = np.vectorize(k)(x)
    return np.dot(k_x, np.transpose(weight))

.......

import math

def boundary_loss(x, y, weight, epsilon):
    prediction = hypothesis(x, weight)
    scatter = np.absolute(np.transpose(y) - prediction)
    bound = lambda z: z if z >= epsilon else 0
    return np.sum(np.vectorize(bound)(scatter))
First, let's look at the objective function. The first term, 1/2 * w^2 (wish this site had LaTeX support but this will suffice), correlates with the margin of the SVM. The article you linked doesn't, in my opinion, explain this very well and describes this term as "the model's complexity", but perhaps this is not the best way of putting it. Minimizing this term maximizes the margin (while still representing the data well), which is the predominant goal of using SVMs for regression.
Warning, Math Heavy Explanation: The reason this is the case is that when maximizing the margin, you want to find the "farthest" non-outlier point right on the margin and minimize its distance. Let this farthest point be x_n. We want to find its Euclidean distance d from the plane f(w, x) = 0, which I will rewrite as w^T * x + b = 0 (where w^T is just the transpose of the weights matrix so that we can multiply the two). To find the distance, let us first normalize the plane such that |w^T * x_n + b| = epsilon, which we can do WLOG as w is still able to form all possible planes of the form w^T * x + b = 0.

Then, let's note that w is perpendicular to the plane. This is obvious if you have dealt a lot with planes (particularly in vector calculus), but can be proven by choosing two points on the plane x_1 and x_2, then noticing that w^T * x_1 + b = 0, and w^T * x_2 + b = 0. Subtracting the two equations we get w^T(x_1 - x_2) = 0. Since x_1 - x_2 is just any vector strictly on the plane, and its dot product with w is 0, we know that w is perpendicular to the plane.

Finally, to actually calculate the distance between x_n and the plane, we take the vector formed by x_n and some point on the plane x' (the vector would then be x_n - x') and project it onto the vector w. Doing this, we get d = |w * (x_n - x') / |w||, which we can rewrite as d = (1 / |w|) * |w^T * x_n - w^T * x'|, and then add and subtract b to the inside to get d = (1 / |w|) * |w^T * x_n + b - w^T * x' - b|. Notice that w^T * x_n + b is epsilon (from our normalization above), and that w^T * x' + b is 0, as this is just a point on our plane. Thus, d = epsilon / |w|.

Notice that maximizing this distance subject to our constraint of finding the x_n and having |w^T * x_n + b| = epsilon is a difficult optimization problem. What we can do is restructure this optimization problem as minimizing 1/2 * w^T * w subject to the first two constraints in the picture you attached, that is, |y_i - f(x_i, w)| <= epsilon. You may think that I have forgotten the slack variables, and this is true; while focusing on this term and ignoring the second term, we ignore the slack variables for now, and I will bring them back later. The reason these two optimizations are equivalent is not obvious, but the underlying reason lies in discrimination boundaries, which you are free to read more about (it's a lot more math that frankly I don't think this answer needs more of). Then, note that minimizing 1/2 * w^T * w is the same as minimizing 1/2 * |w|^2, which is the desired result we were hoping for. End of the Heavy Math
Now, notice that we want to make the margin big, but not so big that includes noisy outliers like the one in the picture you provided.
Thus, we introduce a second term. To keep the margin down to a reasonable size, the slack variables are introduced (I will call them p and p* because I don't want to type out "psi" every time). These slack variables will ignore everything in the margin, i.e. those are the points that do not harm the objective and the ones that are "correct" in terms of their regression status. However, the points outside the margin are outliers; they do not reflect well on the regression, so we penalize them simply for existing. The slack error function that is given there is relatively easy to understand: it just adds up the slack error of every point (p_i + p*_i) for i = 1,...,N, and then multiplies by a modulating constant C which determines the relative importance of the two terms. A low value of C means that we are okay with having outliers, so the margin will be thinned and more outliers will be produced. A high value of C indicates that we care a lot about not having slack, so the margin will be made bigger to accommodate these outliers at the expense of representing the overall data less well.
A few things to note about p and p*. First, note that they are both always >= 0. The constraint in your picture shows this, but it also intuitively makes sense as slack should always add to the error, so it is positive. Second, notice that if p > 0, then p* = 0 and vice versa as an outlier can only be on one side of the margin. Last, all points inside the margin will have p and p* be 0, since they are fine where they are and thus do not contribute to the loss.
Notice that with the introduction of the slack variables, if you have any outliers then you won't want the condition from the first term, that is, |w^T * x_n + b| = epsilon as the x_n would be this outlier, and your whole model would be screwed up. What we allow for, then, is to change the constraint to be |w^T * x_n + b| = epsilon + (p + p*). When translated to the new optimization's constraint, we get the full constraint from the picture you attached, that is, |y_i - f(x_i, w)| <= epsilon + p + p*. (I combined the two equations into one here, but you could rewrite them as the picture is and that would be the same thing).
Hopefully after covering all this up, the motivation for the objective function and the corresponding slack variables makes sense to you.
If I understand the question correctly, you also want code to calculate this objective/loss function, which I think isn't too bad. I have not tested this (yet), but I think this should be what you want.
# Function for calculating the error/loss for a SVM. I assume that:
# - 'x' is 2d array representing the vectors of the data points
# - 'y' is an array representing the values each vector actually gives
# - 'weights' is an array of weights that we tune for the regression
# - 'epsilon' is a scalar representing the breadth of our margin.
def optimization_objective(x, y, weights, epsilon):
    # Calculates first term of objective (note that norm^2 = dot product)
    margin_term = np.dot(weights, weights) / 2
    # Now calculate second term of objective. First get the sum of slacks.
    slack_sum = 0
    for i in range(len(x)):  # For each observation
        # First find the absolute distance between expected and observed.
        diff = abs(hypothesis(x[i], weights) - y[i])
        # Now subtract epsilon
        diff -= epsilon
        # If diff is still more than 0, then it is an 'outlier' and will have slack.
        slack = max(0, diff)
        # Add it to the slack sum
        slack_sum += slack
    # Now we have the slack_sum, so then multiply by C (I picked this as 1 arbitrarily)
    C = 1
    slack_term = C * slack_sum
    # Now, simply return the sum of the two terms, and we are done.
    return margin_term + slack_term
I got this function working on my computer with small data, and you may have to change it a little to work with your data if, for example, the arrays are structured differently, but the idea is there. Also, I am not the most proficient with python, so this may not be the most efficient implementation, but my intent was to make it understandable.
Now, note that this just calculates the error/loss (whatever you want to call it). To actually minimize it requires going into Lagrangians and intense quadratic programming which is a much more daunting task. There are libraries available for doing this but if you want to do this library free as you are doing with this, I wish you good luck because doing that is not a walk in the park.
Finally, I would like to note that I got most of this information from notes I took in an ML class last year, and the professor (Dr. Abu-Mostafa) was a great help in learning the material. The lectures for this class are online (by the same prof), and the pertinent ones for this topic are here and here (although in my very biased opinion you should watch all the lectures, they were a great help). Leave a comment/question if you need anything cleared up or if you think I made a mistake somewhere. If you still don't understand, I can try to edit my answer to make more sense. Hope this helps!

Algorithm to smooth a curve while keeping the area under it constant

Consider a discrete curve defined by the points (x1,y1), (x2,y2), (x3,y3), ... ,(xn,yn)
Define a constant SUM = y1+y2+y3+...+yn. Say we change the value of some k number of y points (increase or decrease) such that the total sum of these changed points is less than or equal to the constant SUM.
What would be the best possible manner to adjust the other y points given the following two conditions:
The total sum of the y points (y1'+y2'+...+yn') should remain constant ie, SUM.
The curve should retain as much of its original shape as possible.
A simple solution would be to define some delta as follows:
delta = (ym1' + ym2' + ym3' + ... + ymk') - (ym1 + ym2 + ym3 + ... + ymk)
and to distribute this delta over the rest of the points equally. Here ym1' is the value of a modified point after modification and ym1 is its value before modification, so delta is the total change introduced by the modifications.
However, this would not ensure a totally smooth curve, as the area near the changed points would appear ragged. Does a better solution/algorithm exist for this problem?
I've used the following approach, though it is a bit OTT.
Consider adding d[i] to y[i], to get s[i], the smoothed value.
We seek to minimise
S = Sum{ 1<=i<N-1 | sqr( s[i+1]-2*s[i]+s[i-1]) } + f*Sum{ 0<=i<N | sqr( d[i]) }
The first term is a sum of the squares of (an approximate) second derivative of the curve, and the second term penalises moving away from the original. f is a (positive) constant. A little algebra recasts this as
S = sqr( ||A*d - b||)
where the matrix A has a nice structure, and indeed A'*A is penta-diagonal, which means that the normal equations (ie d = Inv(A'*A)*A'*b) can be solved efficiently. Note that d is computed directly, there is no need to initialise it.
Given the solution d to this problem we can compute the solution d^ to the same problem but with the constraint One'*d = 0 (where One is the vector of all ones) like this
d^ = d - (One'*d/Q) * e
e = Inv(A'*A)*One
Q = One'*e
What value to use for f? Well, a simple approach is to try out this procedure on sample curves for various values of f and pick one that looks good. Another approach is to pick an estimate of smoothness, for example the rms of the second derivative, decide on a value that it should attain, and then search for an f that gives that value. As a general rule, the bigger f is, the less smooth the smoothed curve will be.
Some motivation for all this. The aim is to find a 'smooth' curve 'close' to a given one. For this we need a measure of smoothness (the first term in S) and a measure of closeness (the second term). Why these measures? Well, each is easy to compute, and each is quadratic in the variables (the d[]); this will mean that the problem becomes an instance of linear least squares, for which there are efficient algorithms available. Moreover, each term in each sum depends on nearby values of the variables, which will in turn mean that the 'inverse covariance' (A'*A) will have a banded structure and so the least squares problem can be solved efficiently. Why introduce f? Well, if we didn't have f (or set it to 0) we could minimise S by setting d[i] = -y[i], getting a perfectly smooth curve s[] = 0, which has nothing to do with the y curve. On the other hand if f is gigantic, then to minimise S we should concentrate on the second term, and set d[i] = 0, and our 'smoothed' curve is just the original. So it's reasonable to suppose that as we vary f, the corresponding solutions will vary between being very smooth but far from y (small f) and being close to y but a bit rough (large f).
It's often said that the normal equations, whose use I advocate here, are a bad way to solve least squares problems, and this is generally true. However with 'nice' banded systems -- like the one here -- the loss of stability through using the normal equations is not so great, while the gain in speed is so great. I've used this approach to smooth curves with many thousands of points in a reasonable time.
To see what A is, consider the case where we had 4 points. Then our expression for S comes down to:
sqr( s[2] - 2*s[1] + s[0]) + sqr( s[3] - 2*s[2] + s[1]) + f*(d[0]*d[0] + .. + d[3]*d[3]).
If we substitute s[i] = y[i] + d[i] in this we get, for example,
s[2] - 2*s[1] + s[0] = d[2]-2*d[1]+d[0] + y[2]-2*y[1]+y[0]
and so we see that for this to be sqr( ||A*d-b||) we should take
A = ( 1 -2 1 0)
( 0 1 -2 1)
( f 0 0 0)
( 0 f 0 0)
( 0 0 f 0)
( 0 0 0 f)
and
b = ( -(y[2]-2*y[1]+y[0]))
( -(y[3]-2*y[2]+y[1]))
( 0 )
( 0 )
( 0 )
( 0 )
In an implementation, though, you probably wouldn't want to form A and b, as they are only going to be used to form the normal equation terms, A'*A and A'*b. It would be simpler to accumulate these directly.
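For what it's worth, here is a small dense NumPy sketch of the procedure described above (my own illustration, not the author's code; a serious implementation would exploit the banded structure instead of forming dense matrices, and note the penalty rows use sqrt(f) so their squared residuals contribute f*d[i]^2):
import numpy as np

def smooth_curve(y, f):
    """Minimize the sum of squared second differences of s = y + d plus f*sum(d^2),
    then project d so that sum(d) = 0 (total/area preserved), as described above."""
    n = len(y)
    D = np.zeros((n - 2, n))                      # second-difference rows
    for i in range(n - 2):
        D[i, i:i+3] = [1.0, -2.0, 1.0]
    A = np.vstack([D, np.sqrt(f) * np.eye(n)])    # smoothness rows + penalty rows
    b = np.concatenate([-D @ y, np.zeros(n)])
    AtA = A.T @ A                                  # penta-diagonal in practice
    d = np.linalg.solve(AtA, A.T @ b)              # normal equations
    e = np.linalg.solve(AtA, np.ones(n))           # e = Inv(A'*A)*One
    d -= (d.sum() / e.sum()) * e                   # enforce One'*d = 0
    return y + d

y = np.sin(np.linspace(0, 3, 200)) + 0.1 * np.random.default_rng(1).standard_normal(200)
s = smooth_curve(y, f=0.01)
print(y.sum(), s.sum())    # the sums (areas) match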
This is a constrained optimization problem. The functional to be minimized is the integrated difference of the original curve and the modified curve. The constraints are the area under the curve and the new locations of the modified points. It is not easy to write such codes on your own. It is better to use some open source optimization codes, like this one: ool.
What about keeping the same dynamic range?
1. compute original min0,max0 y-values
2. smooth y-values
3. compute new min1,max1 y-values
4. linearly interpolate all values to match the original min/max y: y=min1+(y-min1)*(max0-min0)/(max1-min1)
that is it
Not sure about the area, but this should keep the shape much closer to the original. I got this idea just now while reading your question, and since I face a similar problem I will try to code it right away. Anyway, +1 for giving me this idea :)
You can adapt this and combine it with the area.
So before this, compute the area, apply #1..#4, and after that compute the new area. Then multiply all values by the old_area/new_area ratio, as in the sketch below. If you also have negative values and are not computing the absolute area, then you have to handle positive and negative areas separately and find a multiplication ratio that best fits the original area for both at once.
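A minimal sketch of that combination (mine, assuming all y values are positive so the simple sum ratio applies):
import numpy as np

def restore_range_and_area(y_orig, y_smooth):
    # steps #1..#4: rescale smoothed values back to the original min/max range
    min0, max0 = y_orig.min(), y_orig.max()
    min1, max1 = y_smooth.min(), y_smooth.max()
    y = min1 + (y_smooth - min1) * (max0 - min0) / (max1 - min1)
    # then restore the area via the old/new ratio (positive values assumed)
    return y * (y_orig.sum() / y.sum())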
[edit1] some results for constant dynamic range
As you can see, the shape is slightly shifting to the left. Each image is after applying a few hundred smoothing operations. I am thinking of subdividing into local min/max intervals to improve this ...
[edit2] I have finished the filter for my own purposes
void advanced_smooth(double *p, int n)
{
    int i, j, i0, i1;
    double a0, a1, b0, b1, dp, w0, w1;
    double *p0, *p1, *w; int *q;
    if (n < 3) return;
    p0 = new double[n << 2]; if (p0 == NULL) return;
    p1 = p0 + n;
    w  = p1 + n;
    q  = (int*)((double*)(w + n));
    // compute original min,max
    for (a0 = p[0], i = 0; i < n; i++) if (a0 > p[i]) a0 = p[i];
    for (a1 = p[0], i = 0; i < n; i++) if (a1 < p[i]) a1 = p[i];
    for (i = 0; i < n; i++) p0[i] = p[i];          // store original values for range restoration
    // compute local min max positions to p1[]
    dp = 0.01 * (a1 - a0);                         // min delta threshold
    // compute first derivation
    p1[0] = 0.0; for (i = 1; i < n; i++) p1[i] = p[i] - p[i-1];
    for (i = 1; i < n-1; i++)                      // eliminate glitches
        if (p1[i] * p1[i-1] < 0.0)
            if (p1[i] * p1[i+1] < 0.0)
                if (fabs(p1[i]) <= dp)
                    p1[i] = 0.5 * (p1[i-1] + p1[i+1]);
    for (i0 = 1; i0; )                             // remove zeros from derivation
        for (i0 = 0, i = 0; i < n; i++)
            if (fabs(p1[i]) < dp)
            {
                if ((i > 0) && (fabs(p1[i-1]) >= dp)) { i0 = 1; p1[i] = p1[i-1]; }
                else if ((i < n-1) && (fabs(p1[i+1]) >= dp)) { i0 = 1; p1[i] = p1[i+1]; }
            }
    // find local min,max to q[]
    q[n-2] = 0; q[n-1] = 0; for (i = 1; i < n-1; i++) if (p1[i] * p1[i-1] < 0.0) q[i-1] = 1; else q[i-1] = 0;
    for (i = 0; i < n; i++)                        // set sign as +max,-min
        if ((q[i]) && (p1[i] < -dp)) q[i] = -q[i]; // this shifts smooth curve to the left !!!
    // compute weights
    for (i0 = 0, i1 = 1; i1 < n; i0 = i1, i1++)    // loop through all local min,max intervals
    {
        for (; (!q[i1]) && (i1 < n-1); i1++);      // <i0,i1>
        b0 = 0.5 * (p[i0] + p[i1]);
        b1 = fabs(p[i1] - p[i0]);
        if (b1 >= 1e-6)
            for (b1 = 0.35/b1, i = i0; i <= i1; i++)  // compute weights bigger near local min max
                w[i] = 0.8 + (fabs(p[i] - b0) * b1);
    }
    // smooth few times
    for (j = 0; j < 5; j++)
    {
        for (i = 0; i < n; i++) p1[i] = p[i];      // store data to avoid shifting by using half filtered data
        for (i = 1; i < n-1; i++)                  // FIR smooth filter
        {
            w0 = w[i];
            w1 = (1.0 - w0) * 0.5;
            p[i] = (w1 * p1[i-1]) + (w0 * p1[i]) + (w1 * p1[i+1]);
        }
        for (i = 1; i < n-1; i++)                  // avoid local min,max shifting too much
        {
            if (q[i] > 0)                          // local max
            {
                if (p[i] < p[i-1]) p[i] = p[i-1];  // cannot be lower than neighbours
                if (p[i] < p[i+1]) p[i] = p[i+1];
            }
            if (q[i] < 0)                          // local min
            {
                if (p[i] > p[i-1]) p[i] = p[i-1];  // cannot be higher than neighbours
                if (p[i] > p[i+1]) p[i] = p[i+1];
            }
        }
    }
    for (i0 = 0, i1 = 1; i1 < n; i0 = i1, i1++)    // loop through all local min,max intervals
    {
        for (; (!q[i1]) && (i1 < n-1); i1++);      // <i0,i1>
        // restore original local min,max
        a0 = p0[i0]; b0 = p[i0];
        a1 = p0[i1]; b1 = p[i1];
        if (a0 > a1)
        {
            dp = a0; a0 = a1; a1 = dp;
            dp = b0; b0 = b1; b1 = dp;
        }
        b1 -= b0;
        if (b1 >= 1e-6)
            for (dp = (a1 - a0)/b1, i = i0; i <= i1; i++)
                p[i] = a0 + ((p[i] - b0) * dp);
    }
    delete[] p0;
}
So p[n] is the input/output data. There are a few things that can be tweaked, like:
weights computation (constants 0.8 and 0.35 mean weights are <0.8, 0.8+0.35/2>)
number of smooth passes (now 5 in the for loop)
the bigger the weight, the less the filtering (1.0 means no change)
The main idea behind it is:
find local extremes
compute weights for smoothing
so that near local extremes there is almost no change to the output
smooth
repair dynamic range per each interval between all local extremes
[Notes]
I also tried to restore the area, but that is incompatible with my task because it distorts the shape a lot. So if you really need the area, then focus on that and not on the shape. The smoothing mostly causes the signal to shrink, so after area restoration the shape rises in magnitude.
The current filter state has no noticeable sideways shifting of the shape (which was the main goal for me). Some images for a more bumpy signal (the original filter was extremely poor on this):
As you can see, there is no visible shifting of the signal shape. The local extremes have a tendency to create sharp spikes after very heavy smoothing, but that was expected.
Hope it helps ...

How to compute frequency of data using FFT?

I want to know the frequency of my data. I have a rough idea that it can be done using an FFT, but I am not sure how. Once I pass the entire data to the FFT, it gives me 2 peaks, but how can I get the frequency?
Thanks a lot in advance.
Here's what you're probably looking for:
When you talk about computing the frequency of a signal, you probably aren't so interested in the component sine waves. This is what the FFT gives you. For example, if you sum sin(2*pi*10x)+sin(2*pi*15x)+sin(2*pi*20x)+sin(2*pi*25x), you probably want to detect the "frequency" as 5 (take a look at the graph of this function). However, the FFT of this signal will detect the magnitude of 0 for the frequency 5.
What you are probably more interested in is the periodicity of the signal. That is, the interval at which the signal becomes most like itself. So most likely what you want is the autocorrelation. Look it up. This will essentially give you a measure of how self-similar the signal is to itself after being shifted over by a certain amount. So if you find a peak in the autocorrelation, that would indicate that the signal matches up well with itself when shifted over that amount. There's a lot of cool math behind it, look it up if you are interested, but if you just want it to work, just do this:
Window the signal, using a smooth window (a cosine will do. The window should be at least twice as large as the largest period you want to detect. 3 times as large will give better results). (see http://zone.ni.com/devzone/cda/tut/p/id/4844 if you are confused).
Take the FFT (however, make sure the FFT size is twice as big as the window, with the second half being padded with zeroes. If the FFT size is only the size of the window, you will effectively be taking the circular autocorrelation, which is not what you want. see https://en.wikipedia.org/wiki/Discrete_Fourier_transform#Circular_convolution_theorem_and_cross-correlation_theorem )
Replace all coefficients of the FFT with their square value (real^2+imag^2). This is effectively taking the autocorrelation.
Take the iFFT
Find the largest peak in the iFFT. This is the strongest periodicity of the waveform. You can actually be a little more clever in which peak you pick, but for most purposes this should be enough. To find the frequency, you just take f=1/T.
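A rough NumPy sketch of those steps (my own illustration; the window length, padding and period search range are assumptions you would tune for your data):
import numpy as np

def estimate_period(x, fs, min_period, max_period):
    """Periodicity via windowed FFT autocorrelation, following the steps above."""
    win_len = min(len(x), 3 * max_period)          # window ~3x the largest period
    seg = x[:win_len] * np.hanning(win_len)        # smooth (cosine) window
    spec = np.fft.rfft(seg, n=2 * win_len)         # zero-padded FFT (avoids circular wrap)
    power = spec.real**2 + spec.imag**2            # squared magnitudes
    acf = np.fft.irfft(power)[:win_len]            # inverse FFT -> autocorrelation
    lag = min_period + np.argmax(acf[min_period:max_period + 1])
    return fs / lag                                 # f = 1/T

fs = 8000
t = np.arange(4096)
x = np.sin(2*np.pi*t/200) + 0.5*np.sin(2*np.pi*t/100)          # repeats every 200 samples
print(estimate_period(x, fs, min_period=50, max_period=400))   # ~40.0 Hz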
Suppose x[n] = cos(2*pi*f0*n/fs) where f0 is the frequency of your sinusoid in Hertz, n=0:N-1, and fs is the sampling rate of x in samples per second.
Let X = fft(x). Both x and X have length N. Suppose X has two peaks at n0 and N-n0.
Then the sinusoid frequency is f0 = fs*n0/N Hertz.
Example: fs = 8000 samples per second, N = 16000 samples. Therefore, x lasts two seconds.
Suppose X = fft(x) has peaks at 2000 and 14000 (=16000-2000). Therefore, f0 = 8000*2000/16000 = 1000 Hz.
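A quick NumPy check of that relationship (just a sketch):
import numpy as np

fs, N, f0 = 8000, 16000, 1000
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n / fs)
X = np.abs(np.fft.fft(x))
n0 = int(np.argmax(X[:N // 2]))        # peak in the bottom half
print(n0, fs * n0 / N)                 # 2000, 1000.0 Hz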
If you have a signal with one frequency (for instance:
y = sin(2 pi f t)
With:
y time signal
f the central frequency
t time
Then you'll get two peaks, one at a frequency corresponding to f, and one at a frequency corresponding to -f.
So, to get to a frequency, you can discard the negative frequency part. It is located after the positive frequency part. Furthermore, the first element in the array is the DC offset, so its frequency is 0. (Beware that this offset is usually much larger than 0, so the other frequency components might get dwarfed by it.)
In code: (I've written it in python, but it should be equally simple in c#):
import numpy as np
from pylab import *
x = np.random.rand(100) # create 100 random numbers of which we want the fourier transform
x = x - mean(x) # make sure the average is zero, so we don't get a huge DC offset.
dt = 0.1 #[s] 1/the sampling rate
fftx = np.fft.fft(x) # the frequency transformed part
# now discard anything that we do not need..
fftx = fftx[range(int(len(fftx)/2))]
# now create the frequency axis: it runs from 0 to the sampling rate /2
freq_fftx = np.linspace(0, 1/(2*dt), len(fftx))
# and plot a power spectrum
plot(freq_fftx,abs(fftx)**2)
show()
Now the frequency is located at the largest peak.
If you are looking at the magnitude results from an FFT of the type most commonly used, then a strong sinusoidal frequency component of real data will show up in two places: once in the bottom half, plus its complex conjugate mirror image in the top half. Those two peaks both represent the same spectral peak and same frequency (for strictly real data). If the FFT result bin numbers start at 0 (zero), then the frequency of the sinusoidal component represented by the bin in the bottom half of the FFT result is most likely:
Frequency_of_Peak = Data_Sample_Rate * Bin_number_of_Peak / Length_of_FFT ;
Make sure to work out your proper units within the above equation (to get units of cycles per second, per fortnight, per kiloparsec, etc.)
Note that unless the wavelength of the data is an exact integer submultiple of the FFT length, the actual peak will be between bins, thus distributing energy among multiple nearby FFT result bins. So you may have to interpolate to better estimate the frequency peak. Common interpolation methods to find a more precise frequency estimate are 3-point parabolic and Sinc convolution (which is nearly the same as using a zero-padded longer FFT).
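For reference, a sketch of the 3-point parabolic interpolation mentioned (my own formulation of the standard parabola fit, not something from the answer):
import numpy as np

def parabolic_peak(mag, k):
    """Refine the peak location: mag is the FFT magnitude array, k the index of
    the largest bin. Returns the fractional bin of the fitted parabola's vertex."""
    a, b, c = mag[k - 1], mag[k], mag[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # offset in (-0.5, 0.5)
    return k + delta

# the frequency estimate then becomes: sample_rate * parabolic_peak(mag, k) / fft_length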
Assuming you use a discrete Fourier transform to look at frequencies, then you have to be careful about how to interpret the normalized frequencies back into physical ones (i.e. Hz).
According to the FFTW tutorial on how to calculate the power spectrum of a signal:
#include <rfftw.h>
...
{
    fftw_real in[N], out[N], power_spectrum[N/2+1];
    rfftw_plan p;
    int k;
    ...
    p = rfftw_create_plan(N, FFTW_REAL_TO_COMPLEX, FFTW_ESTIMATE);
    ...
    rfftw_one(p, in, out);
    power_spectrum[0] = out[0]*out[0];                 /* DC component */
    for (k = 1; k < (N+1)/2; ++k)                      /* (k < N/2 rounded up) */
        power_spectrum[k] = out[k]*out[k] + out[N-k]*out[N-k];
    if (N % 2 == 0)                                    /* N is even */
        power_spectrum[N/2] = out[N/2]*out[N/2];       /* Nyquist freq. */
    ...
    rfftw_destroy_plan(p);
}
Note it handles data lengths that are not even. Note particularly that if the data length is even, FFTW will give you a "bin" corresponding to the Nyquist frequency (sample rate divided by 2). Otherwise, you don't get it (i.e. the last bin is just below Nyquist).
A MATLAB example is similar, but they are choosing the length of 1000 (an even number) for the example:
N = length(x);
xdft = fft(x);
xdft = xdft(1:N/2+1);
psdx = (1/(Fs*N)).*abs(xdft).^2;
psdx(2:end-1) = 2*psdx(2:end-1);
freq = 0:Fs/length(x):Fs/2;
In general, it can be implementation (of the DFT) dependent. You should create a test pure sine wave at a known frequency and then make sure the calculation gives the same number.
Frequency = speed/wavelength.
Wavelength is the distance between the two peaks.

Averaging angles... Again

I want to calculate the average of a set of angles, which represent source bearings (0 to 360 deg), similar to wind direction.
I know it has been discussed before (several times). The accepted answer was: Compute unit vectors from the angles and take the angle of their average.
However this answer defines the average in a non intuitive way. The average of 0, 0 and 90 will be atan( (sin(0)+sin(0)+sin(90)) / (cos(0)+cos(0)+cos(90)) ) = atan(1/2)= 26.56 deg
I would expect the average of 0, 0 and 90 to be 30 degrees.
So I think it is fair to ask the question again: How would you calculate the average, so such examples will give the intuitive expected answer.
Edit 2014:
After asking this question, I've posted an article on CodeProject which offers a thorough analysis. The article examines the following reference problems:
Given the time-of-day [00:00-24:00) for each birth that occurred in the US in the year 2000 – calculate the mean birth time-of-day.
Given a multiset of direction measurements from a stationary transmitter to a stationary receiver, using a measurement technique with a wrapped normal distributed error – estimate the direction.
Given a multiset of azimuth estimates between two points, made by "ordinary" humans (assumed to be subject to a wrapped truncated normal distributed error) – estimate the direction.
[Note: the OP's question (but not the title) appears to have changed to a rather specialised question ("...the average of a SEQUENCE of angles where each successive addition does not differ from the running mean by more than a specified amount.") - see @MaR's comment and mine. My following answer addresses the OP's title and the bulk of the discussion and answers related to it.]
This is not a question of logic or intuition, but of definition. This has been discussed on SO before without any real consensus. Angles should be defined within a range (which might be -PI to +PI, or 0 to 2*PI, or might be -Inf to +Inf). The answers will be different in each case.
The word "angle" causes confusion as it means different things. The angle of view is an unsigned quantity (and is normally PI > theta > 0. In that cases "normal" averages might be useful. Angle of rotation (e.g. total rotation if an ice skater) might or might not be signed and might include theta > 2PI and theta < -2PI.
What is defined here is angle = direction, which requires vectors. If you use the word "direction" instead of "angle" you will have captured the OP's (apparent original) intention and it will help to move away from scalar quantities.
Wikipedia shows the correct approach when angles are defined circularly such that
theta = theta+2*PI*N = theta-2*PI*N
The answer for the mean is NOT a scalar but a vector. The OP may not feel this is intuitive but it is the only useful correct approach. We cannot redefine the square root of -4 to be -2 because it's more intuitive - it has to be +-2*i. Similarly the average of bearings -90 degrees and +90 degrees is a vector of zero length, not 0.0 degrees.
Wikipedia (http://en.wikipedia.org/wiki/Mean_of_circular_quantities) has a special section and states (The equations are LaTeX and can be seen rendered in Wikipedia):
Most of the usual means fail on circular quantities, like angles, daytimes, fractional parts of real numbers. For those quantities you need a mean of circular quantities.

Since the arithmetic mean is not effective for angles, the following method can be used to obtain both a mean value and a measure for the variance of the angles:

Convert all angles to corresponding points on the unit circle, e.g., α to (cos α, sin α). That is, convert polar coordinates to Cartesian coordinates. Then compute the arithmetic mean of these points. The resulting point will lie on the unit disk. Convert that point back to polar coordinates. The angle is a reasonable mean of the input angles. The resulting radius will be 1 if all angles are equal. If the angles are uniformly distributed on the circle, then the resulting radius will be 0, and there is no circular mean. In other words, the radius measures the concentration of the angles.

Given the angles \alpha_1,\dots,\alpha_n the mean is computed by

M_\alpha = \operatorname{atan2}\left(\frac{1}{n}\cdot\sum_{j=1}^n \sin\alpha_j,\ \frac{1}{n}\cdot\sum_{j=1}^n \cos\alpha_j\right)

using the atan2 variant of the arctangent function, or

M_\alpha = \arg\left(\frac{1}{n}\cdot\sum_{j=1}^n \exp(i\cdot\alpha_j)\right)

using complex numbers.
Note that in the OP's question an angle of 0 is purely arbitrary - there is nothing special about wind coming from 0 as opposed to 180 (except in this hemisphere it's colder on the bicycle). Try changing 0,0,90 to 289, 289, 379 and see how the simple arithmetic no longer works.
(There are some distributions where angles of 0 and PI have special significance but they are not in scope here).
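To make the 289, 289, 379 example concrete (assuming the 379-degree reading wraps around to 19 on a real instrument), here is a tiny sketch comparing the naive arithmetic mean with the vector mean:
import numpy as np

measured = np.array([289.0, 289.0, 19.0])   # 379 wraps around to 19
arithmetic = measured.mean()                 # 199.0 -- points the wrong way entirely
rad = np.radians(measured)
circular = np.degrees(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360
print(arithmetic, circular)                  # 199.0 vs ~315.6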
Here are some intense previous discussions which mirror the current spread of views :-)
How do you calculate the average of a set of circular data?
http://forums.xkcd.com/viewtopic.php?f=17&t=22435
http://www.allegro.cc/forums/thread/595008
Thank you all for helping me see my problem more clearly.
I found what I was looking for.
It is called the Mitsuta method.
The inputs and output are in the range [0..360).
This method is good for averaging data that was sampled using constant sampling intervals.
The method assumes that the difference between successive samples is less than 180 degrees (which means that if we don't sample fast enough, a 330-degree change in the sampled signal would be incorrectly detected as a 30-degree change in the other direction and will insert an error into the calculation). Nyquist–Shannon sampling theorem, anybody?
Here is the C++ code:
#include <vector>
using std::vector;

double AngAvrg(const vector<double>& Ang)
{
    vector<double>::const_iterator iter = Ang.begin();
    double fD    = *iter;
    double fSigD = *iter;
    while (++iter != Ang.end())
    {
        double fDelta = *iter - fD;
        if      (fDelta < -180.) fD += fDelta + 360.;
        else if (fDelta >  180.) fD += fDelta - 360.;
        else                     fD += fDelta;
        fSigD += fD;
    }
    double fAvrg = fSigD / Ang.size();
    if (fAvrg >= 360.) return fAvrg - 360.;
    if (fAvrg <    0.) return fAvrg + 360.;
    return fAvrg;
}
It is explained on page 51 of Meteorological Monitoring Guidance for Regulatory Modeling Applications (PDF)(171 pp, 02-01-2000, 454-R-99-005)
Thank you MaR for sending the link as a comment.
If the sampled data is constant, but our sampling device has an inaccuracy with a Von Mises distribution, a unit-vectors calculation will be appropriate.
This is incorrect on every level.
Vectors add according to the rules of vector addition. The "intuitive, expected" answer might not be that intuitive.
Take the following example. If I have one unit vector (1, 0), with origin at (0,0) that points in the +x-direction and another (-1, 0) that also has its origin at (0,0) that points in the -x-direction, what should the "average" angle be?
If I simply add the angles and divide by two, I can argue that the "average" is either +90 or -90. Which one do you think it should be?
If I add the vectors according to the rules of vector addition (component by component), I get the following:
(1, 0) + (-1, 0) = (0, 0)
In polar coordinates, that's a vector with zero magnitude and angle zero.
So what should the "average" angle be? I've got three different answers here for a simple case.
I think the answer is that vectors don't obey the same intuition that numbers do, because they have both magnitude and direction. Maybe you should describe what problem you're solving a bit better.
Whatever solution you decide on, I'd advise you to base it on vectors. It'll always be correct that way.
What does it even mean to average source bearings? Start by answering that question, and you'll get closer to being to define what you mean by the average of angles.
In my mind, an angle with tangent equal to 1/2 is the right answer. If I have a unit force pushing me in the direction of the vector (1, 0), another force pushing me in the direction of the vector (1, 0) and a third force pushing me in the direction of the vector (0, 1), then the resulting force (the sum of these forces) is the force pushing me in the direction of (2, 1). These are the vectors representing the bearings 0 degrees, 0 degrees and 90 degrees. The angle represented by the vector (2, 1) has tangent equal to 1/2.
Responding to your second edit:
Let's say that we are measuring wind direction. Our 3 measurements were 0, 0, and 90 degrees. Since all measurements are equally reliable, why shouldn't our best estimate of the wind direction be 30 degrees? Setting it to 26.56 degrees is a bias toward 0...
Okay, here's an issue. The unit vector with angle 0 doesn't have the same mathematical properties that the real number 0 has. Using the notation 0v to represent the vector with angle 0, note that
0v + 0v = 0v
is false but
0 + 0 = 0
is true for real numbers. So if 0v represents wind with unit speed and angle 0, then 0v + 0v is wind with double unit speed and angle 0. And then if we have a third wind vector (which I'll represent using the notation 90v) which has angle 90 and unit speed, then the wind that results from the sum of these vectors does have a bias because it's traveling at twice unit speed in the horizontal direction but only unit speed in the vertical direction.
In my opinion, this is about angles, not vectors. For that reason the average of 360 and 0 is truly 180.
The average of one turn and no turns should be half a turn.
Edit: Equivalent, but more robust algorithm (and simpler):
divide angles into 2 groups, [0-180) and [180-360)
numerically average both groups
average the 2 group averages with proper weighting
if wraparound occurred, correct by 180˚
This works because number averaging works "logically" if all the angles are in the same hemicircle. We then delay getting wraparound error until the very last step, where it is easily detected and corrected. I also threw in some code for handling opposite angle cases. If the averages are opposite we favor the hemisphere that had more angles in it, and in the case of equal angles in both hemispheres we return None because no average would make sense.
The new code:
def averageAngles2(angles):
    newAngles = [a % 360 for a in angles]
    smallAngles = []
    largeAngles = []
    # split the angles into 2 groups: [0-180) and [180-360)
    for angle in newAngles:
        if angle < 180:
            smallAngles.append(angle)
        else:
            largeAngles.append(angle)
    smallCount = len(smallAngles)
    largeCount = len(largeAngles)
    # averaging each of the groups will work with standard averages
    smallAverage = sum(smallAngles) / float(smallCount) if smallCount else 0
    largeAverage = sum(largeAngles) / float(largeCount) if largeCount else 0
    if smallCount == 0:
        return largeAverage
    if largeCount == 0:
        return smallAverage
    average = (smallAverage * smallCount + largeAverage * largeCount) / \
              float(smallCount + largeCount)
    if largeAverage < smallAverage + 180:
        # average will not hit wraparound
        return average
    elif largeAverage > smallAverage + 180:
        # average will hit wraparound, so will be off by 180 degrees
        return (average + 180) % 360
    else:
        # opposite angles: return whichever has more weight
        if smallCount > largeCount:
            return smallAverage
        elif smallCount < largeCount:
            return largeAverage
        else:
            return None
>>> averageAngles2([0, 0, 90])
30.0
>>> averageAngles2([30, 350])
10.0
>>> averageAngles2([0, 200])
280.0
Here's a slightly naive algorithm:
remove all opposite angles from the list
take a pair of angles
rotate them to the first and second quadrant and average them
rotate average angle back by same amount
for each remaining angle, average in same way, but with successively increasing weight to the composite angle
some python code (step 1 not implemented)
def averageAngles(angles):
    newAngles = [a % 360 for a in angles]
    average = 0
    weight = 0
    for ang in newAngles:
        theta = 0
        if 0 < ang - average <= 180:
            theta = 180 - ang
        else:
            theta = 180 - average
        r_ang = (ang + theta) % 360
        r_avg = (average + theta) % 360
        average = ((r_avg * weight + r_ang) / float(weight + 1) - theta) % 360
        weight += 1
    return average
Here's the answer I gave to this same question:
How do you calculate the average of a set of circular data?
It gives answers inline with what the OP says he wants, but attention should be paid to this:
"I would also like to stress that even though this is a true average of angles, unlike the vector solutions, that does not necessarily mean it is the solution you should be using, the average of the corresponding unit vectors may well be the value you actually should to be using."
You are correct that the accepted answer of using traditional average is wrong.
An average of a set of points x_1 ... x_n in a metric space X is an element x in X that minimizes the sum of squared distances to each point (see Fréchet mean). If you try to find this minimum using simple calculus with regular real numbers, you will recover the standard "add up and divide by n" formula.
For an angle, our elements are actually points on the unit circle S1. Our metric isn't euclidean distance, but arc length, which is proportional to angle.
So, the average angle is the one that minimizes the sum of squared angle differences to each of the other angles. In other words, if you have a function angleBetween(a, b), you want to find the angle a such that the sum over i of angleBetween(a_i, a)^2 is minimized.
This is an optimization problem which can be solved using a numerical optimizer. Several of the answers here claim to provide simpler closed forms, or at least better approximations.
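As a hedged illustration of what such a numerical search could look like (a brute-force grid search, nothing clever):
import numpy as np

def angle_between(a, b):
    """Smallest absolute difference between two angles, in degrees [0, 180]."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def frechet_mean_angle(angles, step=0.5):
    """Search for the angle minimizing the sum of squared arc distances."""
    candidates = np.arange(0, 360, step)
    costs = [sum(angle_between(a, c) ** 2 for a in angles) for c in candidates]
    return candidates[int(np.argmin(costs))]

print(frechet_mean_angle([0, 0, 90]))   # 30.0 for this example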
Statistics
As you point out in your article, you need to assume errors follow a Gaussian distribution to justify using least squares as the maximum likelihood estimator. So in this application, where is the error? Is the random error in the position of two things, and the angle is just the normal of the line between them? If so, that normal will not follow a Gaussian distribution, even if the error in point position does. Taking means of angles only really makes sense if the random error is observed in the angle itself.
You could do this: say you have a set of angles in an array angle. First do angle[i] = angle[i] mod 360 for each element, then perform a simple average over the array. So when you have 360, 10, 20, you are averaging 0, 10 and 20 - the results are intuitive.
What is wrong with taking the set of angles as real values and just computing the arithmetic average of those numbers? Then you would get the intuitive (0+0+90)/3 = 30 deg.
Edit: Thanks for useful comments and pointing out that angles may exceed 360. I believe the answer could be the normal arithmetic average reduced "modulo" 360: we sum all the values, divide by the number of angles and then subtract/add a multiple of 360 so that the result lies in the interval [0..360).
I think the problem stems from how you treat angles greater than 180 (and those greater than 360 as well). If you reduce the angles to a range of +180 to -180 before adding them to the total, you get something more reasonable:
int AverageOfAngles(int angles[], int count)
{
    int total = 0;
    for (int index = 0; index < count; index++)
    {
        int angle = angles[index] % 360;
        if (angle > 180) { angle -= 360; }
        total += angle;
    }
    return (int)((float)total/count);
}
Maybe you could represent angles as quaternions and take average of these quaternions and convert it back to angle.
I don't know if it gives you what you want, because quaternions are rotations rather than angles. I also don't know if it will give you anything different from the vector solution.
Quaternions in 2D simplify to complex numbers, so I guess it's just vectors, but maybe some interesting quaternion averaging algorithm like http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20070017872_2007014421.pdf, when simplified to 2D, will behave better than just the vector average.
Here you go! The reference is https://www.wxforum.net/index.php?topic=8660.0
import math
import numpy as np

def avgWind(directions):
    sinSum = 0
    cosSum = 0
    d2r = math.pi/180  # degree to radian
    r2d = 180/math.pi
    for i in range(len(directions)):
        sinSum += math.sin(directions[i]*d2r)
        cosSum += math.cos(directions[i]*d2r)
    return ((r2d*(math.atan2(sinSum, cosSum)) + 360) % 360)

a = np.random.randint(low=0, high=360, size=6)
print(a)
print(avgWind(a))
