If I have time series data -- a list of {x,y} pairs -- and want to smooth it, I can use an Exponential Moving Average like so:
EMA[data_, alpha_:.1] :=
Transpose @ {#1, ExponentialMovingAverage[#2, alpha]} & @@ Transpose@data
How would you implement double exponential smoothing?
DEMA[data_, alpha_, gamma_] := (* unstub me! *)
If it figured out good values for alpha and gamma by itself, that would be extra nice.
Related question about how to handle the case where there are gaps in the time series, i.e., the samples are not uniformly spread out over time:
Exponential Moving Average Sampled at Varying Times
I am not sure this is the fastest code one can get, yet the following seems to do it:
DEMA[data_, alpha_, gamma_] :=
 Module[{st = First[data], bt = data[[2]] - data[[1]], btnew, stnew},
  Reap[
    Sow[st];
    Do[
     stnew = alpha y + (1 - alpha) (st + bt);
     btnew = gamma (stnew - st) + (1 - gamma) bt;
     Sow[stnew];
     st = stnew;
     bt = btnew;
     , {y, Rest@data}]][[-1, 1]]
  ]
This is almost direct from the page you referenced. You can modify the initial condition for b in the source code. Setting bt initially to zero recovers single exponential smoothing.
In[81]:= DEMA[{a, b, c, d}, alpha, gamma]
Out[81]= {a, (1 - alpha) b + alpha b,
alpha c + (1 - alpha) ((1 - alpha) b +
alpha b + (-a + b) (1 - gamma) + (-a + (1 - alpha) b +
alpha b) gamma),
alpha d + (1 -
alpha) (alpha c + (1 -
gamma) ((-a + b) (1 - gamma) + (-a + (1 - alpha) b +
alpha b) gamma) + (1 - alpha) ((1 - alpha) b +
alpha b + (-a + b) (1 - gamma) + (-a + (1 - alpha) b +
alpha b) gamma) +
gamma (-(1 - alpha) b - alpha b +
alpha c + (1 - alpha) ((1 - alpha) b +
alpha b + (-a + b) (1 - gamma) + (-a + (1 - alpha) b +
alpha b) gamma)))}
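For readers following along outside Mathematica, here is a minimal Python sketch of the same recurrence (Holt-style double exponential smoothing), with the trend initialised to data[1] - data[0] exactly as in the Module code above; the function name and plain-list interface are just illustrative.

def dema(data, alpha, gamma):
    # Double exponential smoothing:
    #   s[t] = alpha*y[t] + (1 - alpha)*(s[t-1] + b[t-1])
    #   b[t] = gamma*(s[t] - s[t-1]) + (1 - gamma)*b[t-1]
    s = data[0]              # level, initialised to the first sample
    b = data[1] - data[0]    # trend, initialised to the first difference
    out = [s]
    for y in data[1:]:
        s_new = alpha * y + (1 - alpha) * (s + b)
        b = gamma * (s_new - s) + (1 - gamma) * b
        s = s_new
        out.append(s)
    return out

As noted above, initialising b to zero instead of data[1] - data[0] reduces this to single exponential smoothing.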
Here is my formulation:
DEMA[data_, alpha_, gamma_] :=
 FoldList[
   Module[{x, y},
     x = #[[1]] + #[[2]];
     y = #2 - alpha x;
     {y + x, #[[2]] + gamma * y}
     ] &,
   {data[[1]], data[[2]] - data[[1]]},
   alpha * Rest@data
   ][[All, 1]]
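The FoldList version can be mirrored in Python with itertools.accumulate (the initial argument needs Python 3.8+). This is only a sketch for comparison, folding over the pre-scaled tail alpha*y exactly as alpha * Rest@data does above; dema_fold is an illustrative name.

from itertools import accumulate

def dema_fold(data, alpha, gamma):
    # State is (level, trend); ay is alpha*y, matching alpha * Rest@data.
    def step(state, ay):
        s, b = state
        x = s + b
        y = ay - alpha * x          # y == alpha*(sample - (s + b))
        return (y + x, b + gamma * y)

    states = accumulate((alpha * y for y in data[1:]), step,
                        initial=(data[0], data[1] - data[0]))
    return [s for s, _ in states]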
I need to find the distance from O and N to the diagonals (the perpendicular, i.e. shortest, distance). I found a formula online, but why does it not return the correct distance in this case?
And if possible, how can I normalize the result (e.g. O is at 20% of the diagonal)?
import numpy as np
import math
O = (1,3)
N = (3,2)
r = np.arange(24).reshape((6, 4))
def get_diagonal_distance(centroid, img_test):
    x1, y1 = centroid
    a, b = img_test.shape[1], img_test.shape[0]
    c = np.sqrt(np.square(a) + np.square(b))
    d = abs((a * x1 + b * y1 + c)) / (math.sqrt(a * a + b * b))
    return d
print(f"diagonal d: {get_diagonal_distance(O, r): .4f}")
d = abs((a * x1 + b * y1 + c)) / (math.sqrt(a * a + b * b))
Your computation is wrong because a, b and c must be the coefficients of the line equation ax + by + c = 0, not the width, height and diagonal length of the image.
import numpy as np
O = (1,3)
N = (3,2)
M, L, I, H = (-1,-2), (3, -2), (3, 2), (-1, 2)
# Following your initial idea
def get_diagonal_distance(diagonal_extremes, point):
    diagonal_vector = (diagonal_extremes[1][0] - diagonal_extremes[0][0],
                       diagonal_extremes[1][1] - diagonal_extremes[0][1])
    a = diagonal_vector[1]
    b = - diagonal_vector[0]
    c = - diagonal_extremes[0][0]*a - diagonal_extremes[0][1]*b
    x, y = point[0], point[1]
    return abs((a * x + b * y + c)) / (np.sqrt(a * a + b * b))

# Taking advantage of numpy
def distance_from_diagonal(diagonal_extremes, point):
    u = (diagonal_extremes[1][0] - diagonal_extremes[0][0],
         diagonal_extremes[1][1] - diagonal_extremes[0][1])
    v = (point[0] - diagonal_extremes[0][0],
         point[1] - diagonal_extremes[0][1])
    return np.cross(u, v) / np.linalg.norm(u)
print(f"diagonal d: {get_diagonal_distance((M, I), O): .4f}")
print(f"diagonal d: {distance_from_diagonal((M, I), O): .4f}")
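For the second part of the question (expressing where O falls along the diagonal, e.g. "O is at 20% of the diagonal"), one option is to project the point onto the diagonal with a dot product and divide by the squared diagonal length. This is only a sketch of that idea, not part of the answer above, and fraction_along_diagonal is a hypothetical helper name.

import numpy as np

def fraction_along_diagonal(diagonal_extremes, point):
    # Project (point - start) onto the diagonal direction and normalise:
    # 0.0 at the first extreme, 1.0 at the second.
    start = np.asarray(diagonal_extremes[0], dtype=float)
    end = np.asarray(diagonal_extremes[1], dtype=float)
    u = end - start
    v = np.asarray(point, dtype=float) - start
    return np.dot(u, v) / np.dot(u, u)

M, I, O = (-1, -2), (3, 2), (1, 3)
print(f"O is at {fraction_along_diagonal((M, I), O):.0%} of the diagonal")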
I have an implicit function, and I'm trying to plot the derivative of the solution of this function.
The function is:
p = \[Phi]/(1/\[Beta]) +
    ((1 - \[Phi]) \[Phi] (1 - \[Lambda]) \[Beta])/
     (((W - A) 1/\[Beta] + \[Phi] A/p) 1/\[Beta] + \[Phi]*(1 - \[Phi]) A/p)/
     (1/\[Beta] (\[Lambda]/((W - A) 1/\[Beta] + \[Phi] (A/p)) +
        (1 - \[Lambda])/(((W - A) 1/\[Beta] + \[Phi] A/p) 1/\[Beta] + \[Phi] (1 - \[Phi]) A/p)))
And I would like to plot the derivative of the following expression w.r.t \[Phi]
\[Phi]/p ((1-\[Phi]) A + 1/\[Beta] A )
I've been trying to first solve for p explicitly, plug the solution into the expression above, and plot the derivative of that expression, but I keep getting an error. My code is:
Manipulate[
 ans = p /.
   Solve[p - \[Phi]/(1/\[Beta]) -
      ((1 - \[Phi]) \[Phi] (1 - \[Lambda]) \[Beta])/
       (((W - A) 1/\[Beta] + \[Phi] A/p) 1/\[Beta] + \[Phi]*(1 - \[Phi]) A/p)/
       (1/\[Beta] (\[Lambda]/((W - A) 1/\[Beta] + \[Phi] (A/p)) +
          (1 - \[Lambda])/(((W - A) 1/\[Beta] + \[Phi] A/p) 1/\[Beta] + \[Phi] (1 - \[Phi]) A/p))) == 0, p],
 Plot[Evaluate[
   D[f[\[Phi]] == \[Phi]/ans[[2]] ((1 - \[Phi]) A + 1/\[Beta] A), \[Phi]]],
  {\[Phi], 0.01, 1}],
 {A, 10, 500}, {\[Beta], 0.001, 1}, {W, 100, 10^9}, {\[Lambda], 0.01, 1}]
The error I'm getting is:
Manipulate: Manipulate argument
  Plot[Evaluate[D[f[\[Phi]] == (\[Phi] Power[<<2>>]) (Times[<<2>>] + Times[<<2>>]), \[Phi]]], {\[Phi], 0.01, 1}]
does not have the correct form for a variable specification.
What am I doing wrong?
Thank you!
Can I ask about the logical flow of the Mathematica code below? What are the variables arg and abs doing? I have been searching for answers online and used ToMatlab, but still cannot work it out. Thank you.
Code:
PositiveCubicRoot[p_, q_, r_] :=
Module[{po3 = p/3, a, b, det, abs, arg},
b = ( po3^3 - po3 q/2 + r/2);
a = (-po3^2 + q/3);
det = a^3 + b^2;
If[det >= 0,
det = Power[Sqrt[det] - b, 1/3];
-po3 - a/det + det
,
(* evaluate real part, imaginary parts cancel anyway *)
abs = Sqrt[-a^3];
arg = ArcCos[-b/abs];
abs = Power[abs, 1/3];
abs = (abs - a/abs);
arg = -po3 + abs*Cos[arg/3]
]
]
abs and arg are being reused multiple times in the algorithm.
In the case where det < 0 the steps are
po3 = p/3;
b = (po3^3 - po3 q/2 + r/2);
a = (-po3^2 + q/3);
abs1 = Sqrt[-a^3];
arg1 = ArcCos[-b/abs1];
abs2 = Power[abs1, 1/3];
abs3 = (abs2 - a/abs2);
arg2 = -po3 + abs3*Cos[arg1/3]
abs3 can be identified as A in this answer: Using trig identity to solve a cubic equation
That is the most salient point of this answer.
Evaluating symbolically and numerically may provide some other insights.
Using demo inputs
{p, q, r} = {-2.52111798, -71.424692, -129.51520};
Copyable version of trig identity notes - NB a, b, p & q are used differently in this post
Plot[x^3 - 2.52111798 x^2 - 71.424692 x - 129.51520, {x, 0, 15}]
a = 1;
b = -2.52111798;
c = -71.424692;
d = -129.51520;
p = (3 a c - b^2)/(3 a^2);
q = (2 b^3 - 9 a b c + 27 a^2 d)/(27 a^3);
A = 2 Sqrt[-p/3]
A == abs3
-(b/3) + A Cos[1/3 ArcCos[
-((b/3)^3 - (b/3) c/2 + d/2)/Sqrt[-(-(b^2/9) + c/3)^3]]]
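To see what abs and arg are doing numerically, here is a rough Python transliteration of PositiveCubicRoot (only a sketch for checking the demo inputs; the det < 0 branch is the trigonometric one discussed above).

import math

def positive_cubic_root(p, q, r):
    # Real root of x^3 + p*x^2 + q*x + r, following the Mathematica code above.
    po3 = p / 3
    b = po3**3 - po3 * q / 2 + r / 2
    a = -po3**2 + q / 3
    det = a**3 + b**2
    if det >= 0:
        t = (math.sqrt(det) - b) ** (1 / 3)
        return -po3 - a / t + t
    # Trigonometric branch: abs and arg are reused as scratch variables.
    abs_ = math.sqrt(-a**3)       # abs1
    arg = math.acos(-b / abs_)    # arg1
    abs_ = abs_ ** (1 / 3)        # abs2
    abs_ = abs_ - a / abs_        # abs3, equal to 2 Sqrt[-a], i.e. A above
    return -po3 + abs_ * math.cos(arg / 3)

print(positive_cubic_root(-2.52111798, -71.424692, -129.51520))  # ~10.499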
Edit
There is also a solution shown here
TRIGONOMETRIC SOLUTION TO THE CUBIC EQUATION, by Alvaro H. Salas
Clear[a, b, c]
1/3 (-a + 2 Sqrt[a^2 - 3 b] Cos[1/3 ArcCos[
(-2 a^3 + 9 a b - 27 c)/(2 (a^2 - 3 b)^(3/2))]]) /.
{a -> -2.52111798, b -> -71.424692, c -> -129.51520}
10.499
I have a non-grid-aligned set of input values associated with grid-aligned output values. Given a new input value I want to find the output:
(These are X,Y coordinates, calibrating an imprecise not-square eye-tracking input device to exact locations on screen.)
This looks like Bilinear Interpolation, but my input values are not grid-aligned. Given an input, how can I figure out a reasonable output value?
Answer: In this case where I have sets of input and output points, what is actually needed is to perform inverse bilinear interpolation to find the U,V coordinates of the input point within the quad, and then perform normal bilinear interpolation (as described in Nico's answer below) on the output quad using those U,V coordinates.
You can bilinearly interpolate in any convex tetragon. A Cartesian grid is just a bit simpler because the calculation of the interpolation parameters is trivial. In the general case you interpolate as follows:
parameters alpha, beta
interpolated value = (1 - alpha) * ((1 - beta) * p1 + beta * p2) + alpha * ((1 - beta) * p3 + beta * p4)
In order to calculate the parameters, you have to solve a system of equations. Put your input values in the places of p1 through p4 and solve for alpha and beta.
Then put your output values in the places of p1 through p4 and use the calculated parameters to calculate the final interpolated output value.
For a regular grid, the parameter calculation comes down to:
alpha = x / cell width
beta = y / cell height
which automatically solves the equations.
Here is a sample interpolation for alpha=0.3 and beta=0.6
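In Python the interpolation formula above reads as follows (a minimal sketch; the corner values p1..p4 are placeholders in the same roles as above, with p1 weighted by (1 - alpha)(1 - beta) and p4 by alpha*beta).

def bilerp(p1, p2, p3, p4, alpha, beta):
    # (1 - alpha)*((1 - beta)*p1 + beta*p2) + alpha*((1 - beta)*p3 + beta*p4)
    return ((1 - alpha) * ((1 - beta) * p1 + beta * p2)
            + alpha * ((1 - beta) * p3 + beta * p4))

# e.g. the sample parameters alpha=0.3, beta=0.6 with scalar corner values:
print(bilerp(0.0, 1.0, 2.0, 3.0, alpha=0.3, beta=0.6))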
Actually, the equations can be solved analytically. However, the formulae are quite ugly. Therefore, iterative methods are probably nicer. There are two solutions for the system of equations. You need to pick the solution where both parameters are in [0, 1].
First solution:
alpha = -(b e - a f + d g - c h + sqrt(-4 (c e - a g) (d f - b h) +
(b e - a f + d g - c h)^2))/(2 c e - 2 a g)
beta = (b e - a f - d g + c h + sqrt(-4 (c e - a g) (d f - b h) +
(b e - a f + d g - c h)^2))/(2 c f - 2 b g)
where
a = -p1.x + p3.x
b = -p1.x + p2.x
c = p1.x - p2.x - p3.x + p4.x
d = interpolated_point.x - p1.x
e = -p1.y + p3.y
f = -p1.y + p2.y
g = p1.y - p2.y - p3.y + p4.y
h = interpolated_point.y - p1.y
Second solution:
alpha = (-b e + a f - d g + c h + sqrt(-4 (c e - a g) (d f - b h) +
(b e - a f + d g - c h)^2))/(2 c e - 2 a g)
beta = -((-b e + a f + d g - c h + sqrt(-4 (c e - a g) (d f - b h) +
(b e - a f + d g - c h)^2))/( 2 c f - 2 b g))
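Putting the pieces together in Python (a sketch, assuming the corner ordering p1 = nw, p2 = ne, p3 = sw, p4 = se and points given as (x, y) tuples): compute a..h as above, take whichever (alpha, beta) pair lands in [0, 1], then reuse the interpolation formula, here on (x, y) tuples, with the output quad.

from math import sqrt

def inverse_bilerp(p1, p2, p3, p4, pt):
    # Solve for (alpha, beta) such that bilinear interpolation of p1..p4
    # at those parameters reproduces pt; keep the solution in [0, 1].
    a = -p1[0] + p3[0]
    b = -p1[0] + p2[0]
    c = p1[0] - p2[0] - p3[0] + p4[0]
    d = pt[0] - p1[0]
    e = -p1[1] + p3[1]
    f = -p1[1] + p2[1]
    g = p1[1] - p2[1] - p3[1] + p4[1]
    h = pt[1] - p1[1]
    root = sqrt(-4*(c*e - a*g)*(d*f - b*h) + (b*e - a*f + d*g - c*h)**2)
    candidates = [
        (-(b*e - a*f + d*g - c*h + root) / (2*c*e - 2*a*g),
         (b*e - a*f - d*g + c*h + root) / (2*c*f - 2*b*g)),
        ((-b*e + a*f - d*g + c*h + root) / (2*c*e - 2*a*g),
         -((-b*e + a*f + d*g - c*h + root) / (2*c*f - 2*b*g))),
    ]
    return next((al, be) for al, be in candidates if 0 <= al <= 1 and 0 <= be <= 1)

def bilerp_point(p1, p2, p3, p4, alpha, beta):
    return tuple((1 - alpha)*((1 - beta)*a + beta*b) + alpha*((1 - beta)*c + beta*d)
                 for a, b, c, d in zip(p1, p2, p3, p4))

# Input/output quads from the question (nw, ne, sw, se) and the sample point:
inp = [(-19, -7), (10, -8), (-11, 4), (9, 7)]
out = [(-150, -100), (150, -100), (-150, 100), (150, 100)]
alpha, beta = inverse_bilerp(*inp, (-4.2, 1))
print(bilerp_point(*out, alpha, beta))   # roughly (-26.4, 33.0)

Note that this result differs from the three-lerp technique below, which, as its author points out, is not true bilinear interpolation.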
Here's my own technique, along with code for deriving the resulting value. It requires three lerps of the output values (and three percentage calculations to determine the lerp percentages):
Note that this is not bilinear interpolation. It does not remap the quad of input points to the quad of output values, as some input points can result in output values outside the output quad.
Here I'm showing the non-aligned input values on a Cartesian plane (using the sample input values from the question above, multiplied by 10 for simplicity).
To calculate the 'north' point (top green dot), we calculate the percentage across the X axis as
(inputX - northwestX) / (northeastX - northwestX)
= (-4.2 - -19) / (10 - -19)
= 0.51034
We use this percentage to calculate the intercept at the Y axis by lerping between the top Y values:
(targetValue - startValue) * percent + startValue
= (northeastY - northwestY) * percent + northwestY
= (-8 - -7) * 0.51034 + -7
= -7.51034
We do the same on the 'south' edge:
(inputX - southwestX) / (southeastX - southwestX)
= (-4.2 - -11) / (9 - -11)
= 0.34
(southeastY - southwestY) * percent + southwestY
= (7 - 4) * 0.34 + 4
= 5.02
Finally, we use these two values to calculate the final percentage between the north and south edges:
(inputY - southY) / (northY - southY)
= (1 - 5.02) / (-7.51034 - 5.02)
= 0.3208
With these three percentages in hand we can calculate our final output values by lerping between the points:
nw = Vector(-150,-100)
ne = Vector( 150,-100)
sw = Vector(-150, 100)
se = Vector( 150, 100)
north = lerp( nw, ne, 0.51034) --> ( 3.10, -100.00)
south = lerp( sw, se, 0.34) --> (-48.00, 100.00)
result = lerp( south, north, 0.3208) --> (-31.61, 35.84)
Finally, here is some (Lua) code performing the above. It uses a mutable Vector object that supports the ability to copy values from another vector and lerp its values towards another vector.
-- Creates a bilinear interpolator
-- Corners should be an object with nw/ne/sw/se keys,
-- each of which holds a pair of mutable Vectors
-- { nw={inp=vector1, out=vector2}, … }
function tetragonalBilinearInterpolator(corners)
local sides = {
n={ pt=Vector(), pts={corners.nw, corners.ne} },
s={ pt=Vector(), pts={corners.sw, corners.se} }
}
for _,side in pairs(sides) do
side.minX = side.pts[1].inp.x
side.diff = side.pts[2].inp.x - side.minX
end
-- Mutates the input vector to hold the result
return function(inpVector)
for _,side in pairs(sides) do
local pctX = (inpVector.x - side.minX) / side.diff
side.pt:copyFrom(side.pts[1].inp):lerp(side.pts[2].inp,pctX)
side.inpY = side.pt.y
side.pt:copyFrom(side.pts[1].out):lerp(side.pts[2].out,pctX)
end
local pctY = (inpVector.y-sides.s.inpY)/(sides.n.inpY-sides.s.inpY)
return inpVector:copyFrom(sides.s.pt):lerp(sides.n.pt,pctY)
end
end
local interp = tetragonalBilinearInterpolator{
nw={ inp=Vector(-19,-7), out=Vector(-150,-100) },
ne={ inp=Vector( 10,-8), out=Vector( 150,-100) },
sw={ inp=Vector(-11, 4), out=Vector(-150, 100) },
se={ inp=Vector( 9, 7), out=Vector( 150, 100) }
}
print(interp(Vector(-4.2, 1))) --> <-31.60 35.84>
I'm trying to minimize a non-linear function of four variables with some linear constraints. Mathematica 8 is unable to find a good solution, giving complex values of the function at some point in the iteration. This implies that one or more constraints are not being enforced in the process. Is this a bug or a limitation of the optimization function?
Function to minimize is
ff[lxw_, lwz_, c_, d_] := - J1 (lxw + lwz) - 2 J2 c +
T (-Log[2] - 1/2 (1 - lxw) Log[(1 - lxw)/4] -
1/2 (1 + lxw) Log[(1 + lxw)/4] -
1/2 (1 - lwz) Log[(1 - lwz)/4] -
1/2 (1 + lwz) Log[(1 + lwz)/4] + 1/2 (1 - d) Log[(1 - d)/16] +
1/8 (1 + 2 c + d - 2 lwz - 2 lxw) Log[
1/16 (1 + 2 c + d - 2 lwz - 2 lxw)])
where
T = 10;
J1 = 1;
J2 = -0.2;
are constant parameters. Then I try
NMinimize[{ff[lxw, lwz, c, d],
2 c + d - 2 lwz - 2 lxw >= -0.999 &&
-0.999 <= lxw <= 0.999 &&
-0.999 <= lwz <= 0.999 &&
-0.999 <= c <= 0.999 &&
d <= 0.9999}, {lxw, lwz, c, d}]
with the result
NMinimize::nrnum: The function value 5.87777 - 4.87764 I is not a real number
at {c, d, lwz, lxw} = {-0.718817, -1.28595, 0.69171, -0.932461}.
I would appreciate if someone can give a hint at what is happening here.
Try this:
Clear[ff];
ff[lxw_, lwz_, c_, d_] /; 2 c + d - 2 lwz - 2 lxw >= -0.999 :=
< your function def >
This will cause the function to be left unevaluated in case NMinimize takes an excursion out of bounds. Sorry, I can't test this from here. If that doesn't do it, try asking on mathematica.stackexchange.com.
As an aside, why use <= .999 instead of simply < 1?
It just might help if you fix that too (use the integer 1, not 1.).
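The same idea can be imitated outside Mathematica: refuse to evaluate the objective whenever the optimizer wanders outside the region where the logs are defined. Below is a minimal Python sketch of that guard pattern, assuming the constants T, J1, J2 from the question; ff is a direct transcription, and ff_guarded mirrors the /; condition by returning inf instead of producing a complex value.

import math

T, J1, J2 = 10, 1, -0.2

def ff(lxw, lwz, c, d):
    return (-J1*(lxw + lwz) - 2*J2*c
            + T*(-math.log(2)
                 - 0.5*(1 - lxw)*math.log((1 - lxw)/4)
                 - 0.5*(1 + lxw)*math.log((1 + lxw)/4)
                 - 0.5*(1 - lwz)*math.log((1 - lwz)/4)
                 - 0.5*(1 + lwz)*math.log((1 + lwz)/4)
                 + 0.5*(1 - d)*math.log((1 - d)/16)
                 + 0.125*(1 + 2*c + d - 2*lwz - 2*lxw)
                   * math.log((1 + 2*c + d - 2*lwz - 2*lxw)/16)))

def ff_guarded(lxw, lwz, c, d):
    # Out of bounds: return +inf so a minimizer never sees a complex value
    # (or, in Python, a math domain error).
    if 2*c + d - 2*lwz - 2*lxw < -0.999:
        return math.inf
    return ff(lxw, lwz, c, d)

# The point from the warning message is rejected instead of going complex:
print(ff_guarded(-0.932461, 0.69171, -0.718817, -1.28595))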
The warning appears because, at the values given in the warning, the last term in ff is complex due to taking the log of a negative number, i.e.
{c, d, lwz, lxw} = {
-0.7188174745559741`,
-1.2859482844800894`,
0.6917100913968041`,
-0.9324611085040573`};
Log[1/16 (1 + 2 c + d - 2 lwz - 2 lxw)]
-2.5558 + 3.14159 i
1/16 (1 + 2 c + d - 2 lwz - 2 lxw)
-0.0776301
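The same check in Python (just a sketch, using cmath so the complex value is visible instead of raising a domain error):

import cmath

c, d, lwz, lxw = (-0.7188174745559741, -1.2859482844800894,
                  0.6917100913968041, -0.9324611085040573)

arg = (1 + 2*c + d - 2*lwz - 2*lxw) / 16
print(arg)             # about -0.07763: negative, so its log is complex
print(cmath.log(arg))  # about -2.5558 + 3.14159j, matching the value above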
In Mathematica 9 a result is produced in addition to the warning:
{-4.90045, {c -> 0.94425, d -> -0.315633, lwz -> 0.900231, lxw -> -0.191476}}
I.e.
{c, d, lwz, lxw} = {
0.9442497691706085`,
-0.31563295950647885`,
0.900230825707721`,
-0.1914760216875171`};
ff[lxw, lwz, c, d]
-4.90045