How to translate based on origin / world space with a homogeneous matrix

I don't know if this is the right place to ask, since what I'm asking about is more the basic principle (and its implementation) than specific code. I apologize in advance.
Currently, I'm trying to make my own 2D engine using SDL & C++ with my limited knowledge and rusty linear algebra, and I'm stuck on the transformation part. I've coded my own vector and matrix classes (Vector2, Vector3, Matrix2x2, Matrix3x3) and derived Matrix3x3 into a Transform class to hold the transform of an object in the scene. The position comes from x = elements[0][2], y = elements[1][2], while the angle comes from atan2(elements[1][0], elements[0][0]).
Now suppose I have this transform for the object:
| 0.86602540378 -0.5 50 |
| 0.5 0.86602540378 -70 |
| 0 0 1 |
That is, position = (50, -70); rotation = 30 degrees.
Now if I have a translation matrix of:
| 1 0 40 |
| 0 1 20 |
| 0 0 1 |
How do I translate the object not in the local space defined by its rotation but in world / global space, so that the final transform of the object is:
| 0.86602540378 -0.5 90 |
| 0.5 0.86602540378 -50 |
| 0 0 1 |
Thanks in advance.

In this case, try applying left-multiplication. As you're probably aware, matrix multiplication is not commutative in general: AB is not the same as BA. If you multiply with the translation matrix on the left and the object's transform on the right, you'll get the world-space translation you're after.
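For the numbers in the question, left-multiplying the object's transform M by the translation matrix T leaves the rotation block untouched and simply adds (40, 20) to the translation column:

| 1 0 40 |   | 0.86602540378 -0.5          50  |   | 0.86602540378 -0.5          90  |
| 0 1 20 | * | 0.5           0.86602540378 -70 | = | 0.5           0.86602540378 -50 |
| 0 0 1  |   | 0             0             1   |   | 0             0             1   |

Right-multiplying instead (M * T) would first rotate the offset (40, 20) by the object's 30-degree rotation, adding roughly (24.64, 37.32) to the position, i.e. a translation in the object's local space.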

Related

Solving a constrained system of linear equations

I have a system of equations of the form y = Ax + b, where y, x and b are n×1 vectors and A is an n×n (symmetric) matrix.
So here is the wrinkle. Not all of x is unknown. Certain rows of x are specified and the corresponding rows of y are unknown. Below is an example
| 10 |   |  5 -2  1 | | * |   | -1 |
|  * | = | -2  2  0 | | 1 | + |  1 |
|  1 |   |  1  0  1 | | * |   |  2 |
where * designates unknown quantities.
I have built a solver for problems such as the above in Fortran, but I wanted to know if there is a decent, robust solver out there as part of LAPACK or MKL for these types of problems.
My solver is based on a permutation vector pivot = [1,3,2] which rearranges the x and y vectors according to known and unknown
| 10 |   |  5  1 -2 | | * |   | -1 |
|  1 | = |  1  1  0 | | * | + |  2 |
|  * |   | -2  0  2 | | 1 |   |  1 |
and then solving using a block-matrix formulation & LU decomposition:
! solves an n×n system of equations where k values are known from the 'x' vector
function solve_linear_system(A,b,x_known,y_known,pivot,n,k) result(x)
use lu
integer(c_int),intent(in) :: n, k, pivot(n)
real(c_double),intent(in) :: A(n,n), b(n), x_known(k), y_known(n-k)
real(c_double) :: x(n), y(n), r(n-k), A1(n-k,n-k), A3(n-k,k), b1(n-k)
integer(c_int) :: u, code, d, indx(n-k)
u = n-k
!store known `x` and `y` values
x(pivot(u+1:n)) = x_known
y(pivot(1:u)) = y_known
!define block matrices
! | y_known |   | A1  A3 | | *       |   | b1 |
! | *       | = | A3' A2 | | x_known | + | b2 |
A1 = A(pivot(1:u), pivot(1:u))
A3 = A(pivot(1:u), pivot(u+1:n))
b1 = b(pivot(1:u))
!define new rhs vector
r = y_known - matmul(A3, x_known) - b1
!solve `A1*x=r` with LU decomposition from the NR book
call ludcmp(A1,u,indx,d,code)
call lubksb(A1,u,indx,r)
!store unknown 'x' values (stored into 'r' by 'lubksb')
x(pivot(1:u)) = r
end function
For the example above the solution is
    | 10.0 |       |  3.5 |
y = | -4.0 |   x = |  1.0 |
    |  1.0 |       | -4.5 |
PS: the linear systems typically have n <= 20 equations.
If all of x were unknown, this would be a linear least squares problem.
Your a-priori knowledge can be introduced with equality constraints (fixing some variables), transforming it into a linear equality-constrained least squares problem.
There is indeed an algorithm within LAPACK solving the latter, called xGGLSE.
(It also seems you need to multiply b by -1 in your case to be compatible with that routine's definition.)
Edit: On further inspection, I missed the unknowns within y. Ouch. This is bad.
First, I would rewrite your system into an AX = b form where A and b are known. In your example, and provided I didn't make any mistakes, that gives (the second unknown, here labeled x2, is really the unknown y2):

    | 5 0 1 |       | x1 |           | 13 |
A = | 2 1 0 |   X = | x2 |   and b = |  3 |
    | 1 0 1 |       | x3 |           | -1 |

Then you can use plenty of methods from various libraries, like LAPACK or BLAS, depending on the properties of your matrix A (positive-definite, ...). As a starting point, I would suggest a simple method with a direct inversion of A, especially since your matrices are small. There are also many iterative approaches (Jacobi, gradient methods, Gauss-Seidel, ...) that you can use for bigger cases.
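As a quick sanity check, a minimal Julia sketch (Julia chosen just for brevity; the backslash solve uses an LU factorization underneath) solves the rewritten system and reproduces the solution quoted in the question:

# Rewritten system A*X = b, with X = (x1, y2, x3)
A = [5.0 0.0 1.0;
     2.0 1.0 0.0;
     1.0 0.0 1.0]
b = [13.0, 3.0, -1.0]
X = A \ b
# X = [3.5, -4.0, -4.5], i.e. x1 = 3.5, y2 = -4.0, x3 = -4.5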
Edit: an idea to solve it in 2 steps.
First step: you can rewrite your system as 2 subsystems that have X and Y as unknowns, where each subsystem's dimension equals the number of unknowns in the corresponding vector.
The first subsystem, in X, will be AX = b, which can be solved by direct or iterative methods.
Second step: the second system, in Y, can be resolved directly once you know X, because Y is expressed in the form Y = A'X + b'.
I think this approach is more general.
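A minimal Julia sketch of this two-step approach on the example above (the index sets are hard-coded here purely for illustration):

A = [5.0 -2.0 1.0; -2.0 2.0 0.0; 1.0 0.0 1.0]
b = [-1.0, 1.0, 2.0]
ky = [1, 3]              # rows where y is known
ux = [1, 3]              # indices of unknown x entries
kx = [2]                 # indices of known x entries
y_known = [10.0, 1.0]
x_known = [1.0]

# Step 1: solve the subsystem in the unknown x's
rhs = y_known - A[ky, kx] * x_known - b[ky]
x_unk = A[ky, ux] \ rhs              # -> [3.5, -4.5]

# Step 2: the unknown y's follow directly, Y = A'X + b'
x = zeros(3); x[ux] = x_unk; x[kx] = x_known
y_unk = A[[2], :] * x + b[[2]]       # -> [-4.0]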

Use dc.seriesChart as a child chart to dc.compositeChart

I need to draw a complex chart with two Y axes and several series bound to each Y axis. Besides, the X values for each series are different. So I tried to create two seriesCharts and compose them with one compositeChart, and I got an exception; it seems impossible to use a seriesChart as a child chart. Is that true? Is there another solution? Thanks.
I have a data structure like this:

id1 | id2 | value   | date
  0 |   0 | 5,95796 | 5.24.15 0:00
  0 |   0 | 5,83062 | 5.24.15 0:01
  5 |   0 | 24757   | 5.24.15 4:21
  5 |   0 | 24638   | 5.24.15 4:22
  9 |   1 | 391,6   | 5.24.15 9:00
  9 |   1 | 391,6   | 5.24.15 9:31

The first id is the number of the series; the second id (0 or 1) tells which Y axis should be used. 'value' is the Y value, 'date' is X. The dataset is very large, and up to 40000 records can be displayed for each series. I'm new to dc.js and have really racked my brain figuring out how to manage this. Maybe I'm going the wrong way. I'd really appreciate any advice!

Kernel density estimation in Julia

I am trying to implement kernel density estimation. However, my code does not produce the answer it should. The code is written in Julia, but it should be self-explanatory.
Here is the algorithm:

f_hat(x) = 1/(n*h) * sum_{i=1}^{n} K( (x - X_i) / h )

where K is the uniform kernel

K(u) = 0.5 if |u| <= 1, and 0 otherwise.

So the algorithm tests whether the distance between x and an observation X_i, weighted by some constant factor (the binwidth h), is less than one. If so, it assigns 0.5 / (n*h) to that value, where n = number of observations.
Here is my implementation:
#Kernel density function.
#Purpose: estimate the probability density function (pdf)
#of the given observations
##param data: observations for which the pdf should be estimated
##return: returns an array with the estimated densities
function kernelDensity(data)

    #Uniform kernel function.
    ##param x: current x value
    ##param observation: x value of observation i
    ##param width: binwidth
    ##return: returns 1 if the absolute distance from
    #x (current) to x (observation), weighted by the binwidth,
    #is less than 1; else it returns 0.
    function uniformKernel(x, observation, width)
        u = (x - observation) / width
        abs(u) <= 1 ? 1 : 0
    end

    #number of observations in the data set
    n = length(data)

    #binwidth (set arbitrarily to 0.1)
    h = 0.1

    #vector that stores the pdf
    res = zeros(Real, n)

    #counter variable for the loop
    counter = 0

    #lower and upper limit of the x axis
    start = floor(minimum(data))
    stop  = ceil(maximum(data))

    #main loop
    ##linspace: divides the space from start to stop into n
    #equally spaced intervals
    for x in linspace(start, stop, n)
        counter += 1
        for observation in data
            #count all observations for which the kernel
            #returns 1, multiplied by 0.5 because the
            #kernel uses the absolute difference, which can be
            #either positive or negative
            res[counter] += 0.5 * uniformKernel(x, observation, h)
        end
        #divide by n times h
        res[counter] /= n * h
    end
    #return results
    res
end
#run the function
##rand: generates 10 uniform random numbers between 0 and 1
kernelDensity(rand(10))
and this is being returned:
> 0.0
> 1.5
> 2.5
> 1.0
> 1.5
> 1.0
> 0.0
> 0.5
> 0.5
> 0.0
the sum of which is: 8.5 (the cumulative distribution function; it should be 1).
So there are two bugs:
The values are not properly scaled: each number should be around one tenth of its current value. In fact, if the number of observations increases by a factor of 10^n, n = 1, 2, ..., then the cdf also increases by a factor of 10^n.
For example:
> kernelDensity(rand(1000))
> 953.53
They don't sum up to 10 (or to one, if it were not for the scaling error). The error becomes more evident as the sample size increases: approximately 5% of the observations are not being included.
I believe that I implemented the formula 1:1, hence I really don't understand where the error is.
I'm not an expert on KDEs, so take all of this with a grain of salt, but a very similar (but much faster!) implementation of your code would be:
function kernelDensity{T<:AbstractFloat}(data::Vector{T}, h::T)
    n = length(data)       # number of observations
    res = zeros(T, n)      # zero-initialised accumulator (similar() would leave garbage)
    lb = minimum(data); ub = maximum(data)
    for (i,x) in enumerate(linspace(lb, ub, n))
        for obs in data
            res[i] += abs((obs-x)/h) <= 1. ? 0.5 : 0.
        end
        res[i] /= (n*h)
    end
    sum(res)               # return the sum, for checking the scaling
end
If I'm not mistaken, the density estimate should integrate to 1; that is, we would expect kernelDensity(rand(100), 0.1)/100 to get at least close to 1. In the implementation above I'm getting there, give or take 5%, but then again we don't know that 0.1 is the optimal bandwidth (using h = 0.135 instead gets me within 0.1%), and the uniform kernel is known to be only about 93% "efficient".
In any case, there's a very good Kernel Density package in Julia available here, so you probably should just do Pkg.add("KernelDensity") instead of trying to code your own Epanechnikov kernel :)
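For reference, a minimal usage sketch of that package (assuming KernelDensity.jl's kde function, which returns the evaluation grid and the density values):

using KernelDensity
data = rand(100)
k = kde(data)     # k.x is the grid, k.density the estimated pdf values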
To point out the mistake: you have n bins B_i of size 2h covering [0,1], so a random point X lands in an expected 2*h*n bins. Each hit adds 0.5 and you divide by n*h, so each point contributes an expected 2*h*n * 0.5/(n*h) = 1 to the sum.
For n points, the expected value of your function is therefore n.
Actually, you have some bins of size < 2h (for example, if start = 0, half of the first bin is outside of [0,1]); factoring this in gives the bias.
Edit: btw, the bias is easy to calculate if you assume that the bins have random locations in [0,1]. Then the bins are on average missing a fraction h/2 = 5% of their mass.
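To tie this back to the original output: the values in res are density estimates at grid points, so they should be multiplied by the grid spacing before summing, not summed directly. A small Julia check, assuming data in [0,1] as in the question:

res = kernelDensity(rand(10))     # the original, unscaled estimates
step = 1 / (length(res) - 1)      # grid spacing: floor/ceil give limits 0 and 1 here
sum(res) * step                   # on average ~0.95, i.e. 1 minus the ~5% edge bias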

Determine max slope of slowly descending signal

I have an analog power signal from a motor. The signal ramps up quickly but powers off slowly over the course of several seconds. The signal looks almost like a series of plateaus on the descent. The problem is that the signal doesn't settle back to zero; it settles back to an unknown intermediate level that varies from motor to motor. See the sample data below.
I'm trying to find a way to determine when the motor is off and at that intermediate level.
My thought is to find and store the maximum point, and calculate the slopes thereafter until the slope becomes steeper than some large negative value like -160 (~ -60 degrees), and then declare that the motor must be powering off. The sample points below have all duplicates removed (there are about 5000 samples typically).
My problem is determining the X values. In the formula (y2 - y1) / (x2 - x1), the x values could be far enough apart in time that the slope never appears steeper than -30 degrees. Picking an absolute number like 10 would fix this, but is there a more mathematically sound method?
The data below shows me calculating the slope with the method described above, relative to the max of 921, i.e. (y2 - y1) / ((10+1) - 10). In this scheme, at data point 9 I would say the motor is "off". I'm looking for a more precise means of determining an X value rather than arbitrarily picking 10, for instance.
+---+-----+----------+
| X | Y | Slope |
+---+-----+----------+
| 1 | 65 | 856.000 |
| 2 | 58 | 863.000 |
| 3 | 57 | 864.000 |
| 4 | 638 | 283.000 |
| 5 | 921 | 0.000 |
| 6 | 839 | -82.000 |
| 7 | 838 | -83.000 |
| 8 | 811 | -110.000 |
| 9 | 724 | -197.000 |
+---+-----+----------+
EDIT: A much simpler answer:
Since your motor is either ON or OFF, and ON wattages are strictly higher than OFF wattages, you should be able to discriminate between the two by maintaining a running average wattage, reporting ON if the current measurement is higher than the average and OFF if it is lower.
Count = 0
Average = 500
Whenever a measurement comes in,
Count = Count + 1
Average = Average + (Measurement - Average) / Count
Return Measurement > Average ? ON : OFF
This represents an average of all the values the wattage has ever been. If we want to eventually "forget" the earliest values (before the motor was ever turned on), we could either keep a buffer of recent values and use that for a moving average, or approximate a moving average with an IIR like
Average = (1-X) * Average + X * Measurement
for some X between 0 and 1 (closer to 0 to change more slowly).
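As a concrete illustration, here is a minimal Julia sketch of the IIR variant (the smoothing factor 0.01 and the initial average of 500 are arbitrary choices for illustration):

const X = 0.01                # smoothing factor: closer to 0 changes more slowly
average = 500.0               # initial guess for the average wattage

function on_or_off(measurement)
    global average
    average = (1 - X) * average + X * measurement   # exponential moving average
    return measurement > average ? "ON" : "OFF"
end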
Original answer:
You could treat this as an online clustering problem, where you expect three clusters (before the motor turns on, when the motor is on, and when the motor is turned off), or perhaps four (before the motor turns on, peak power, when the motor is running normally, and when the motor turns off). In effect, you're trying to learn what it looks like when a motor is on (or off).
If you don't have any other information about whether the motor is on or off (which could be used to train a model), here's a simple approach:
Define an "Estimate" to contain:
float Value
int Count
Define an "Estimator" to contain:
float TotalError = 0.0
Estimate COLD_OFF = {Value = 0, Count = 1}
Estimate ON = {Value = 1000, Count = 1}
Estimate WARM_OFF = {Value = 500, Count = 1}
a function Update_Estimate(float Measurement)
Find the Estimate E such that E.Value is closest to Measurement
Update TotalError = TotalError + (E.Value - Measurement)*(E.Value - Measurement)
Update E.Value = (E.Value * E.Count + Measurement) / (E.Count + 1)
Update E.Count = E.Count + 1
return E
This takes initial guesses for what the wattages of these stages should be and updates them with the measurements. However, this has some problems. What if our initial guesses are off?
You could initialize some number of Estimators with different possible (e.g. random) guesses for COLD_OFF, ON, and WARM_OFF; after receiving a measurement, let each Estimator update itself and aggregate their values somehow. This aggregation should reward the better estimates. Since you're storing TotalError for each estimate, you could just pick the output of the Estimator that has the lowest TotalError so far, or you could let the Estimators vote (giving each Estimator's vote a weight proportional to 1/(TotalError + 1) or something like that).
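To make this concrete, here is a hedged Julia sketch of a single Estimator (the initial values 0, 1000, and 500 come from the pseudocode above; everything else is illustrative):

mutable struct Estimate
    value::Float64
    count::Int
end

mutable struct Estimator
    total_error::Float64
    estimates::Vector{Estimate}   # COLD_OFF, ON, WARM_OFF
end

make_estimator() = Estimator(0.0,
    [Estimate(0.0, 1), Estimate(1000.0, 1), Estimate(500.0, 1)])

function update!(est::Estimator, measurement::Float64)
    # find the estimate whose value is closest to the measurement
    _, i = findmin([abs(e.value - measurement) for e in est.estimates])
    e = est.estimates[i]
    est.total_error += (e.value - measurement)^2
    e.value = (e.value * e.count + measurement) / (e.count + 1)   # running mean
    e.count += 1
    return i   # index of the cluster this measurement was assigned to
end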

Calculate spin between two Euler angles around their local Y-axes

Let's say we have two rotated objects whose Euler angles are:

Object |    x |    y |   z
     1 |  180 |  360 | 180
     2 | -360 | -720 | 360

Both use rotation order XYZ. When the rotation is zero, the local Y-axis points up.
I'm trying to get the difference in spins around their local Y-axes. Imagine a string connected between the bottoms of Object 1 and Object 2 while both orientations were 0,0,0: how many times would the string have spun around / twisted?
Some examples:
#1 | 0, 360, 0
#2 | 0, 0, 0
1 full twist
#1 | 0, 180, 0
#2 | 0, 0, 0
1/2 twist
#1 | 360, 0, 0
#2 | 0, 0, 0
1 twist. (think about the string that was attached to it, this would also count as a twist in the string)
--
I've been looking into orientations/rotations and the different ways of representing them, like quaternions, Euler angles, and axis-angle. I feel like I know how each works in general, yet I lack the skills to solve this.
Any ideas on how to solve this?
