15-digit floating-point calculation on a microcontroller - algorithm

I want to evaluate an equation on a controller (Arduino):
y = -0.0000000104529251928664x^3 + 0.0000928316793270531x^2 - 0.282333029643959x + 297.661280719026
The decimal values of the coefficients are important because "x" varies in the thousands, so the cubic term cannot be ignored. I have tried manipulating the equation in Excel to reduce the coefficients, but R^2 is lost in the process, and I would like to avoid that.
The maximum variable size available on the Arduino is 4 bytes, and a Google search did not turn up an appropriate solution.
Thank you for your time.

Since
(-0.0000000104529251928664) ^ (1/3) ≈ -0.0021864822
(0.0000928316793270531) ^ (1/2) ≈ 0.00963491978
The formula
y = -0.0000000104529251928664x^3 + 0.0000928316793270531x^2 - 0.282333029643959x + 297.661280719026
Can be rewritten:
y = -(0.0021864822 * x)^3 + (0.00963491978 * x)^2 - 0.282333029643959 * x + 297.661280719026
Rounding all coefficients to 10 decimal places, we get:
y = -(0.0021864822 * x)^3 + (0.00963491978 * x)^2 - 0.2823330296 * x + 297.6612807
But I don't know Arduino, so I'm not sure what the correct number of decimal places is, nor what the compiler will accept or refuse.
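If it helps to judge how much accuracy the 4-byte float actually costs, here is a small sketch (in Python with numpy, purely for illustration, not Arduino code) that evaluates the cubic in nested (Horner) form once in 64-bit precision and once with every stored intermediate forced to 32 bits; the x values are arbitrary examples:

import numpy as np

# Coefficients of the cubic, highest power first
coeffs = [-0.0000000104529251928664, 0.0000928316793270531,
          -0.282333029643959, 297.661280719026]

def y_double(x):
    # reference evaluation in 64-bit precision, Horner form
    return ((coeffs[0] * x + coeffs[1]) * x + coeffs[2]) * x + coeffs[3]

def y_single(x):
    # same Horner evaluation, but every stored value forced to a 4-byte float,
    # roughly mimicking what a 32-bit float on the Arduino would do
    x = np.float32(x)
    acc = np.float32(coeffs[0])
    for c in coeffs[1:]:
        acc = np.float32(acc * x + np.float32(c))
    return acc

for x in (500.0, 2000.0, 5000.0):
    print(x, y_double(x), float(y_single(x)))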

Why does finding the eigenvalues of a 4*4 matrix with z3py take so much time and not give any solutions?

I'm trying to calculate the eigenvalues of a 4*4 matrix called A in my code (I know that the eigenvalues are real). All the elements of A are z3 expressions that need to be calculated from previous constraints. The code below is the last part of a long program that tries to calculate the matrix A and then its eigenvalues. It is written as one piece, but I've split it into two parts in order to debug it: part 1, in which the code tries to find the matrix A, and part 2, which is the eigenvalue calculation. In part 1, the code works very fast and calculates A in less than a second, but when I add part 2, it doesn't give me any solutions.
I was wondering what the reason could be. Is it because of the order of the polynomial (which is 4)? I would appreciate it if anyone could help me find an alternative way to calculate the eigenvalues, or give me some hints on how to rewrite the code so it can solve the problem.
(Note that A2 in the actual code is a matrix with all of its elements being z3 expressions defined by previous constraints in the code. Here I've defined the elements as real values just to make the code executable. With these values the code finds a solution very quickly, but in the real situation it takes very long, like days.
For example, one of the elements of A looks roughly like this:
0 +
1*Vq0__1 +
2 * -Vd0__1 +
0 +
((5.5 * Iq0__1 - 0)/64/5) *
(0 +
0 * (Vq0__1 - 0) +
-521702838063439/62500000000000 * (-Vd0__1 - 0)) +
((.10 * Id0__1 - Etr_q0__1)/64/5) *
(0 +
521702838063439/62500000000000 * (Vq0__1 - 0) +
0.001 * (-Vd0__1 - 0)) +
0 +
0 + 0 +
0 +
((100 * Iq0__1 - 0)/64/5) * 0 +
((20 * Id0__1 - Etr_q0__1)/64/5) * 0 +
0 +
-5/64
All the variables in this example are z3 variables.)
from z3 import *
import numpy as np

def sub(*arg):
    counter = 0
    for matrix in arg:
        if counter == 0:
            counter += 1
            Sub = []
            for i in range(len(matrix)):
                Sub1 = []
                for j in range(len(matrix[0])):
                    Sub1 += [matrix[i][j]]
                Sub += [Sub1]
        else:
            row = len(matrix)
            colmn = len(matrix[0])
            for i in range(row):
                for j in range(colmn):
                    Sub[i][j] = Sub[i][j] - matrix[i][j]
    return Sub

Landa = RealVector('Landa', 2)  # Eigenvalues considered as real values
LandaI0 = np.diag([Landa[0] for i in range(4)]).tolist()
ALandaz3 = RealVector('ALandaz3', 4 * 4)

############# Building ( A - \lambda * I ) to find the eigenvalues ############
A2 = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [3, 7, 4, 1],
      [4, 9, 7, 1]]

s = Solver()
for i in range(4):
    for j in range(4):
        s.add(ALandaz3[4 * i + j] == sub(A2, LandaI0)[i][j])

ALanda = [[ALandaz3[0],  ALandaz3[1],  ALandaz3[2],  ALandaz3[3]],
          [ALandaz3[4],  ALandaz3[5],  ALandaz3[6],  ALandaz3[7]],
          [ALandaz3[8],  ALandaz3[9],  ALandaz3[10], ALandaz3[11]],
          [ALandaz3[12], ALandaz3[13], ALandaz3[14], ALandaz3[15]]]

Determinant = (
    ALandaz3[0] * ALandaz3[5] * (ALandaz3[10] * ALandaz3[15] - ALandaz3[14] * ALandaz3[11]) -
    ALandaz3[1] * ALandaz3[4] * (ALandaz3[10] * ALandaz3[15] - ALandaz3[14] * ALandaz3[11]) +
    ALandaz3[2] * ALandaz3[4] * (ALandaz3[9] * ALandaz3[15] - ALandaz3[13] * ALandaz3[11]) -
    ALandaz3[3] * ALandaz3[4] * (ALandaz3[9] * ALandaz3[14] - ALandaz3[13] * ALandaz3[10]))

tol = 0.001
s.add(And(Determinant >= -tol, Determinant <= tol))  # giving some flexibility instead of equalling to zero

print(s.check())
print(s.model())
Note that you seem to be using Z3 for a type of problem it really isn't meant for. Z3 is a SAT/SMT solver. Such a solver works internally with a huge number of Boolean equations. Integers and fractions can be converted to Boolean expressions, but with general floats Z3 quickly reaches its limits. See here and here for a lot of typical examples, and note how floats are avoided.
Z3 can work with floats in a limited way, converting them to fractions, but it doesn't work with the approximations and accuracy tolerances that are needed in numerical algorithms. Therefore, the results are usually not what you are hoping for.
Finding eigenvalues is a typical numerical problem, where accuracy issues are very tricky. Python has libraries such as numpy and scipy to deal with those efficiently. See e.g. numpy.linalg.eig.
If, however, your A2 matrix contains some symbolic expressions (and uses fractions instead of floats), sympy's matrix functions could be an interesting alternative.
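As a rough sketch of both suggestions, using the example A2 matrix from the question (numpy for the numerical route, sympy for the exact/symbolic route):

import numpy as np
import sympy as sp

A2 = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [3, 7, 4, 1],
      [4, 9, 7, 1]]

# Numerical eigenvalues (fast, floating-point approximations)
eigenvalues, eigenvectors = np.linalg.eig(np.array(A2, dtype=float))
print(eigenvalues)

# Exact eigenvalues via the characteristic polynomial; this also works when
# the entries are sympy symbols or fractions instead of plain numbers
M = sp.Matrix(A2)
print(M.eigenvals())   # dict of {eigenvalue: multiplicity}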

Getting the equation for the line of intersection using Mathematica

I have a nasty expression that I am playing around with in Mathematica:
(-X + (2 X - X^2)/(2 (-1 + X)^2 ((1 + 2 (-1 + p) X - (-1 + p) X^2)/(-1 + X)^2)^(3/2)))/X
I graphed it along with the plane z = 0 where X and p are both restricted from 0 to 1:
Plot3D[{nasty equation is here, 0}, {p , 0, 1}, {X, 0, 1}]
I decided it would be interesting to obtain the equation for the intersection of the surface generated by the nasty expression and the plane z = 0. So I used Solve:
Solve[{that nasty equation == 0}, {p, X}, Reals]
and the output was even nastier, with some results containing the # symbol (I have no idea what that is, and I am new to Mathematica). Is there a way to get an equation for the nice line of intersection between the nasty expression and z = 0, where p and X are restricted from 0 to 1? In the graph generated by Plot3D I see that the line of intersection appears to be some nice-looking half-parabola. I would like the equation for that if possible. Thank you!
For complicated nasty equations Reduce is often more powerful and less likely to give you something that you will later find has hidden assumptions inside the result. Notice I include your constraint about the range of p and X to give Reduce the maximum amount of information that I can to help it produce the simplest possible solution for you.
In[1]:= Reduce[(-X + (2 X-X^2)/(2 (-1 + X)^2 ((1 + 2 (-1 + p) X - (-1 + p) X^2)/
(-1 + X)^2)^(3/2)))/X == 0 && 0 < X < 1 && 0 < p < 1, {X, p}]
Out[1]= 0<X<1 && p == Root[12 - 47*X + 74*X^2 - 59*X^3 + 24*X^4 - 4*X^5 + (-24 +
108*X - 192*X^2 + 168*X^3 - 72*X^4 + 12*X^5)*#1 + (-48*X + 144*X^2 - 156*X^3 +
72*X^4 - 12*X^5)*#1^2 + (-32*X^2 + 48*X^3 - 24*X^4 + 4*X^5)*#1^3 & , 1]
Root is a Mathematica function representing a root of a usually complicated polynomial that would often be much larger if the actual root were written out in algebra, but we can see whether the result is understandable enough to be useful by using ToRadicals. Often Reduce will return several different alternatives using && (and) and || (or) to let you see the details you must understand to correctly use the result. See how I copy the entire Root[...] and put that inside ToRadicals. Notice how Reduce returns answers that include information about the ranges of variables. And see how I give Simplify the domain information about X to allow it to provide the greatest possible simplification.
In[2]:= Simplify[ToRadicals[Root[12 - 47 X + 74 X^2 - 59 X^3 + 24 X^4 - 4 X^5 +
(-24 + 108 X - 192 X^2 + 168 X^3 - 72 X^4 + 12 X^5) #1 + (-48 X + 144 X^2 -
156 X^3 + 72 X^4 - 12 X^5) #1^2 + (-32 X^2 + 48 X^3 - 24 X^4+ 4 X^5)#1^3&,1]],
0 < X < 1]
Out[2]= (8*X - 24*X^2 + 26*X^3 - 12*X^4 + 2*X^5 + 2^(1/3)*(-((-2 + X)^8*(-1 +
X)^2*X^3))^(1/3))/(2*(-2 + X)^3*X^2)
So your desired answer of where z = 0 will be where X is not zero, to avoid 0/0 in your original equation, where 0 < X < 1 and 0 < p < 1, and where p is a root of that final complicated expression in X. That result is a fraction, and to be a root you might take a look at where the numerator is zero to see if you can get any more information about what you are looking for.
Sometimes you can learn something by plotting an expression. If you try to plot that final result you may end up with axes, but no plot. Perhaps the denominator is causing problems. You can try plotting just the numerator. You may again get an empty plot. Perhaps it is your cube root giving complex values. So you can put your numerator inside Re[] and plot that, then repeat that but using Im[]. Those will let you plot just the real and imaginary parts. You are doing this to try to understand where the roots might be. You should be cautious with plots because sometimes, particularly for complicated nasty expressions, the plot can make mistakes or hide desired information from you, but when used with care you can often learn something from this.
And, as always, test this and everything else very carefully to try to make sure that no mistakes have been made. It is too easy to "type some stuff into Mathematica, get some stuff out", think you have the answer and have no idea that there are significant errors hidden.

Shoot a projectile (straight trajectory) at a moving target in 3 dimensions

I have already googled the problem but only found either 2D solutions or formulas that didn't work for me (I found this formula that looks nice: http://www.ogre3d.org/forums/viewtopic.php?f=10&t=55796 but it doesn't seem to be correct).
I am given:
Vec3 cannonPos;
Vec3 targetPos;
Vec3 targetVelocityVec;
float bulletSpeed;
What I'm looking for is the time t such that
targetPos + t * targetVelocityVec
is the intersection point at which to aim the cannon and shoot.
I'm looking for a simple, inexpensive formula for t (by simple I just mean not making many unnecessary vector-space transformations and the like).
Thanks!
The real problem is finding out where in space the bullet can intersect the target's path. The bullet speed is constant, so in a certain amount of time it will travel the same distance regardless of the direction in which we fire it. This means that its position after time t will always lie on a sphere centered at the cannon.
This sphere can be expressed mathematically as:
(x-x_b0)^2 + (y-y_b0)^2 + (z-z_b0)^2 = (bulletSpeed * t)^2 (eq 1)
x_b0, y_b0 and z_b0 denote the position of the cannon. You can find the time t by substituting the expression provided in your question into this equation and solving for t:
targetPos+t*targetVelocityVec (eq 2)
(eq 2) is a vector equation and can be decomposed into three separate equations:
x = x_t0 + t * v_x
y = y_t0 + t * v_y
z = z_t0 + t * v_z
These three equations can be inserted into (eq 1):
(x_t0 + t * v_x - x_b0)^2 + (y_t0 + t * v_y - y_b0)^2 + (z_t0 + t * v_z - z_b0)^2 = (bulletSpeed * t)^2
This equation contains only known variables and can be solved for t. By assigning the constant part of the quadratic subexpressions to constants we can simplify the calculation:
c_1 = x_t0 - x_b0
c_2 = y_t0 - y_b0
c_3 = z_t0 - z_b0
(v_b = bulletSpeed)
(t * v_x + c_1)^2 + (t * v_y + c_2)^2 + (t * v_z + c_3)^2 = (v_b * t)^2
Rearrange it as a standard quadratic equation:
(v_x^2+v_y^2+v_z^2-v_b^2)t^2 + 2*(v_x*c_1+v_y*c_2+v_z*c_3)t + (c_1^2+c_2^2+c_3^2) = 0
This is easily solvable using the standard formula. It can result in zero, one or two solutions. Zero solutions (not counting complex solutions) means that there's no possible way for the bullet to reach the target. One solution will probably happen very rarely, when the target trajectory intersects with the very edge of the sphere. Two solutions will be the most common scenario. A negative solution means that you can't hit the target, since you would need to fire the bullet into the past. These are all conditions you'll have to check for.
When you've solved the equation, you can find the point of collision by substituting t back into (eq 2). In Python:
import math

def intercept_time(cannonPos, targetPos, targetVelocityVec, bulletSpeed):
    # constant offsets between the target's start position and the cannon
    c_1 = targetPos[0] - cannonPos[0]
    c_2 = targetPos[1] - cannonPos[1]
    c_3 = targetPos[2] - cannonPos[2]
    v_x, v_y, v_z = targetVelocityVec
    v_b = bulletSpeed

    # coefficients of the quadratic a*t^2 + b*t + c = 0
    a = v_x**2 + v_y**2 + v_z**2 - v_b**2
    b = 2 * (v_x * c_1 + v_y * c_2 + v_z * c_3)
    c = c_1**2 + c_2**2 + c_3**2

    if b**2 < 4 * a * c:
        # no real solutions: the bullet can never reach the target
        raise ValueError("no real solutions")

    p = -b / (2 * a)
    q = math.sqrt(b**2 - 4 * a * c) / (2 * a)
    t1 = p - q
    t2 = p + q

    # keep only intercept times that are not in the past
    candidates = [t for t in (t1, t2) if t >= 0]
    if not candidates:
        # no positive solutions: all possible trajectories are in the past
        raise ValueError("no positive solutions")

    # we want to hit the target at the earliest possible time
    t = min(candidates)

    # calculate point of collision
    x = targetPos[0] + t * v_x
    y = targetPos[1] + t * v_y
    z = targetPos[2] + t * v_z
    return t, (x, y, z)
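For example, with some made-up numbers (hypothetical values, just to show the call):

t, aimPoint = intercept_time(cannonPos=(0.0, 0.0, 0.0),
                             targetPos=(100.0, 50.0, 0.0),
                             targetVelocityVec=(-5.0, 0.0, 2.0),
                             bulletSpeed=40.0)
print(t, aimPoint)   # time to impact and the point to aim the cannon at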

Implementing semi-implicit backward Euler in a 1-DOF mass-spring system

I have a simple (mass-)spring system with two points connected by a spring. One point is fixed to a ceiling, and I want to calculate the position of the second point using a numerical method. Basically, I have the position of the second point and its velocity, and I want to know how these two values update after one timestep.
The following forces take effect on the point:
Gravitational force, given by -g * m
Spring force, given by k * (l - L) with k being the stiffness, l being the current length and L being the initial length
Damping force, given by -d * v
Summed up, this leads to
F = -g * m + k * (l - L)
Fd = -d * v
Applying, for example, explicit Euler, one can derive the following:
newPos = oldPos + dt * oldVelocity
newVelocity = oldVelocity + dt * (F + Fd) / m, using F = m * a.
However, I now want to use semi-implicit backward Euler, but I can't quite figure out how to derive the Jacobians, etc.
It's probably easiest to see how this goes by considering the fully implicit method first, then going to the semi-implicit one.
Implicit Euler would have (let's call these eqn (1)):
newPos = oldPos + dt * newVelocity
newVelocity = oldVelocity + dt * (-g * m + k*(newPos - L) - d*newVelocity)/m
For now let's just measure positions relative to L so we can get rid of that -kL term. Rearranging we end up with
(newPos, newVelocity) - dt * (newVelocity, k/m newPos - d/m newVelocity) = (oldPos, oldVelocity - g*dt)
and putting that into matrix form
((1, -dt), (-dt*k/m, 1 + dt*d/m)).(newPos, newVelocity) = (oldPos, oldVelocity - g*dt)
Where you know everything in the matrix and everything on the RHS, and you just need to solve for the vector (newPos, newVelocity). You can do this with any Ax=b solver (Gaussian elimination by hand works in this simple case). But since you mention Jacobians, you're presumably looking to solve this with Newton-Raphson iteration or something similar.
In that case, you're essentially looking to solve the zeros of the equation
((1, -dt), (-dt*k/m, 1 + dt*d/m)).(newPos, newVelocity) - (oldPos, oldVelocity - g*dt) = 0
which is to say, f(newPos, newVelocity) = (0,0). You have a previous value to use as a starting guess, (oldPos, oldVelocity). Now you just want to iterate on
(x,v)n+1 = (x,v)n - f((x,v)n)/f'((x,v)n)
until you get a sufficiently good answer; for a vector equation, "dividing by f'" means solving a linear system with the Jacobian. Here,
f(newPos, newVel) = ((1, -dt), (-dt*k/m, 1 + dt*d/m)).(newPos, newVelocity) - (oldPos, oldVelocity - g*dt)
and f'(newPos, newVel) is the Jacobian, which for this linear system is just the matrix
((1, -dt), (-dt*k/m, 1 + dt*d/m))
Going through the process for semi-implicit is the same, but a little easier - not all of the RHS terms in eqns (1) are new quantities. The way it's usually done is
newPos = oldPos + dt * newVelocity
newVelocity = oldVelocity + dt * (-g * m + k*oldPos - d*newVelocity)/m
E.g., the velocity update depends on the old value of the position, and the position update on the new value of the velocity. (This is very similar to "leapfrog" integration.) You should be able to work through the above steps pretty easily with this slightly different set of equations. Basically, the k/m term in the matrix above drops away, and the velocity update can be solved in closed form.
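A minimal Python sketch of one timestep for both variants, following the sign conventions used above (positions measured relative to L) and using numpy only for the 2x2 solve in the fully implicit case; the function names are just illustrative:

import numpy as np

def semi_implicit_step(oldPos, oldVelocity, dt, m, k, d, g):
    # Only the damping term uses the new velocity, so the velocity update can
    # be solved in closed form; the position update then uses the new velocity.
    newVelocity = (oldVelocity + dt * (-g + (k / m) * oldPos)) / (1 + dt * d / m)
    newPos = oldPos + dt * newVelocity
    return newPos, newVelocity

def implicit_step(oldPos, oldVelocity, dt, m, k, d, g):
    # Fully implicit step: solve the 2x2 system from eqns (1),
    # ((1, -dt), (-dt*k/m, 1 + dt*d/m)).(newPos, newVelocity) = (oldPos, oldVelocity - g*dt)
    A = np.array([[1.0, -dt],
                  [-dt * k / m, 1.0 + dt * d / m]])
    b = np.array([oldPos, oldVelocity - g * dt])
    newPos, newVelocity = np.linalg.solve(A, b)
    return newPos, newVelocity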

Possible ways to calculate X = A - inv(B) * Y * inv(B) and X = Y + A' * inv(B) * A

I have two problems. I have to compute two expressions:
X = A - inv(B) * Y * inv(B)
and
X = Y + A' * inv(B) * A
where A, B, and Y are known p*p matrices (p can be small or large, depending on the situation). The matrices are quite dense, without any structure (except B being non-singular, of course).
Is it possible to compute X in those equations without inverting the matrix B? I have to evaluate these equations n times, with n in the hundreds or thousands, and all the matrices change over time.
Thank you very much.
If you can express your updates to your matrix B in the following terms:
Bnew = B + u*s*v
then you can express an update to inv(B) explicitly using the Sherman-Morrison-Woodbury formula:
inv(B + u*s*v) = inv(B) - inv(B)*u*inv(inv(s) + v*inv(B)*u)*v*inv(B)
If u and v are vectors (a column and a row, respectively) and s is a scalar, then this expression simplifies:
inv(B + u*s*v) = inv(B) - inv(B)*u*v*inv(B) / (1/s + v*inv(B)*u)
You would only have to calculate inv(B) once and then update it when it changes, with no additional inversions.
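A small numpy sketch of the rank-1 (scalar s) case, with a random well-conditioned B used only for illustration, checking the updated inverse against a direct re-inversion:

import numpy as np

rng = np.random.default_rng(0)
p = 5
B = rng.standard_normal((p, p)) + p * np.eye(p)   # some non-singular test matrix
B_inv = np.linalg.inv(B)                          # computed once, then kept around

# rank-1 change: Bnew = B + s * u @ v, with u a column vector and v a row vector
u = rng.standard_normal((p, 1))
v = rng.standard_normal((1, p))
s = 2.0

# Sherman-Morrison update of the inverse, no new inversion of a p*p matrix
denom = 1.0 / s + (v @ B_inv @ u).item()
Bnew_inv = B_inv - (B_inv @ u @ v @ B_inv) / denom

print(np.allclose(Bnew_inv, np.linalg.inv(B + s * u @ v)))   # True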
Depending on the size of n with respect to p in your problem, it may be preferable not to calculate the full inverse at all, and instead use simple "matrix divisions" (linear solves) on Y and (Ynew - Y), or on A and (Anew - A).
Memo-ize inv(B), i.e. only invert B when it changes, and keep the inverse around.
If changes to B are small, possibly you could use a delta-approximation.
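As an illustration of the "matrix division" idea, both expressions can be written with numpy.linalg.solve so that B is never explicitly inverted (a sketch with illustrative function names; in practice you could also factor B once and reuse the factorization across the solves):

import numpy as np

def update_X1(A, B, Y):
    # X = A - inv(B) @ Y @ inv(B), using solves instead of an explicit inverse
    BinvY = np.linalg.solve(B, Y)                  # inv(B) @ Y
    BinvY_Binv = np.linalg.solve(B.T, BinvY.T).T   # (inv(B) @ Y) @ inv(B)
    return A - BinvY_Binv

def update_X2(A, B, Y):
    # X = Y + A' @ inv(B) @ A
    return Y + A.T @ np.linalg.solve(B, A)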
