Rounding errors in Ruby matrix implementation

I'm doing a bit o' matrix algebra in Ruby. When testing the results, I'm seeing what I can only assume is a rounding error.
All I'm doing is multiplying 3 matrices, but the values are fairly small:
c_xy:
[0.9702957262759965, 0.012661213742314235, -0.24159035004964077]
[0, 0.9986295347545738, 0.05233595624294383]
[0.24192189559966773, -0.050781354673095955, 0.9689659697053497]
i2k = Matrix[[8.1144E-06, 0.0, 0.0],
[0.0, 8.1144E-06, 0.0],
[0.0, 0.0, 8.1144E-06]]
c_yx:
[0.9702957262759965, 0, 0.24192189559966773]
[0.012661213742314235, 0.9986295347545738, -0.050781354673095955]
[-0.24159035004964077, 0.05233595624294383, 0.9689659697053497]
What I'm trying to do is c_xy * i2k * c_yx. Here's what I expect (this was done in Excel):
8.1144E-06     0             2.11758E-22
0              8.1144E-06    0
2.11758E-22    -5.29396E-23  8.1144E-06
And what I get:
[8.1144e-06, 1.3234889800848443e-23, 6.352747104407253e-22]
[0.0, 8.114399999999998e-06, -5.293955920339377e-23]
[2.117582368135751e-22, 0.0, 8.1144e-06]
As you can see, the first column matches, as does the diagonal. But then (in r,c indexing) (0,1) is wrong (though close to 0), (0,2) is very wrong, and (1,2) and (2,1) seem to be transposed. I thought it had something to do with the 8.1144e-6 value, and tried wrapping it in a BigDecimal to no avail.
Any ideas on places I can look? I'm using the standard Ruby Matrix library.
Edit:
Here's the code.
phi1 = 0.24434609527920614
phi2 = 0.05235987755982988

i2k = Matrix[[8.1144E-06, 0.0, 0.0],
             [0.0, 8.1144E-06, 0.0],
             [0.0, 0.0, 8.1144E-06]]
c_x = Matrix[[1, 0, 0],
             [0, Math.cos(phi2), Math.sin(phi2)],
             [0, -Math.sin(phi2), Math.cos(phi2)]]
c_y = Matrix[[Math.cos(phi1), 0, -Math.sin(phi1)],
             [0, 1, 0],
             [Math.sin(phi1), 0, Math.cos(phi1)]]

c_xy = c_y * c_x
c_yx = c_xy.transpose
c_xy * i2k * c_yx

i2k is equal to the identity matrix times 8.1144E-06. This simplifies the answer to:
c_xy * i2k * c_yx = 8.1144E-06 * c_xy * c_yx
However, since c_yx = c_xy.transpose and c_xy is a rotation matrix, and the transpose of a rotation matrix is its inverse, c_xy * c_yx is the identity matrix. Thus the exact answer is 8.1144E-06 times the identity matrix.
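To see the scalar-matrix step concretely, here is a small sketch with Ruby's Matrix library; the 3x3 matrix a is made up purely for illustration:
require 'matrix'

k = 8.1144e-06
i2k = Matrix.scalar(3, k)                      # k times the 3x3 identity
a = Matrix[[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # any 3x3 matrix, made-up values
# multiplying by a scalar matrix just scales every entry:
p a * i2k == a * k                             # => true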
Here is one way to calculate c_xy * c_yx without using the matrix algebra a priori:
require 'matrix'
require 'pp'

phi1 = 14 * Math::PI / 180
phi2 = 3 * Math::PI / 180

c_x = Matrix[
  [1, 0, 0],
  [0,  Math.cos(phi2), Math.sin(phi2)],
  [0, -Math.sin(phi2), Math.cos(phi2)]]
c_y = Matrix[
  [Math.cos(phi1), 0, -Math.sin(phi1)],
  [0, 1, 0],
  [Math.sin(phi1), 0,  Math.cos(phi1)]]

c_xy = c_y * c_x
c_yx = c_xy.transpose
product = c_xy * c_yx

pp *product       # print the raw rows of the product
clone = *product  # splat the matrix into a plain array of rows

puts "\nApplying EPSILON:"
# zero out entries that are indistinguishable from 0 at Float precision
product.each_with_index do |e, i, j|
  clone[i][j] = 0 if e.abs <= Float::EPSILON
end
pp clone
Output:
[1.0, 0.0, 2.7755575615628914e-17]
[0.0, 0.9999999999999999, -6.938893903907228e-18]
[2.7755575615628914e-17, -6.938893903907228e-18, 0.9999999999999999]
Applying EPSILON:
[1.0, 0, 0]
[0, 0.9999999999999999, 0]
[0, 0, 0.9999999999999999]
which one can then surmise should be the identity matrix. This uses Float::EPSILON, which is about 2.220446049250313e-16, to set any value whose absolute value is no larger than that to 0. These kinds of approximations are inevitable in floating point calculations; whether they are appropriate has to be evaluated on a case-by-case basis.
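Another common pattern is to compare the whole result against the exact answer within a chosen tolerance. The helper below is hypothetical (not from the answer above) and the 1e-12 tolerance is an arbitrary choice:
require 'matrix'

# hypothetical helper: two matrices are "equal" if all entries agree within tol
def approx_equal?(m1, m2, tol = 1e-12)
  (m1 - m2).to_a.flatten.all? { |e| e.abs <= tol }
end

noisy = Matrix[[1.0, 2.7755575615628914e-17], [0.0, 0.9999999999999999]]
p approx_equal?(noisy, Matrix.identity(2))     # => true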
An alternative is to do symbolic computation where possible rather than numeric.

Floating point numbers have limited precision:
puts Float::DIG # => 15
That's the number of significant decimal digits a Float carries on my, and probably your, system. A Float can represent magnitudes far smaller than 1e-15, but it only keeps about 15-16 significant digits, so sums of terms around 1e-6 that should cancel exactly instead leave residues around 1e-22, which is exactly the off-diagonal noise you are seeing. You could try BigDecimal for arbitrary precision.
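A quick sketch of the difference (not from the original answer):
require 'bigdecimal'

# with Floats, a term ~16 orders of magnitude smaller simply disappears:
p 1.0 + 1.0e-16 == 1.0                                             # => true
# BigDecimal keeps the digits:
p BigDecimal("1.0") + BigDecimal("1.0e-16") == BigDecimal("1.0")   # => false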

Related

LightGBM usage of init_score results in no boosting

It seems if lightgbm.train is used with an initial score (init_score) it cannot boost this score.
Here is a simple example:
params = {"learning_rate": 0.1,"metric": "binary_logloss","objective": "binary",
"boosting_type": "gbdt","num_iterations": 5, "num_leaves": 2 ** 2,
"max_depth": 2, "num_threads": 1, "verbose": 0, "min_data_in_leaf": 1}
x = pd.DataFrame([[1, 0.1, 0.3], [1, 0.1, 0.3], [1, 0.1, 0.3],
[0, 0.9, 0.3], [0, 0.9, 0.3], [0, 0.9, 0.3]], columns=["a", "b", "prob"])
y = pd.Series([0, 1, 0, 0, 1, 0])
d_train = lgb.Dataset(x, label=y)
model = lgb.train(params, d_train)
y_pred_default = model.predict(x, raw_score=False)
In the case above, no init_score is used. The predictions are correct:
y_pred_default = [0.33333333, ... ,0.33333333]
d_train = lgb.Dataset(x, label=y, init_score=scipy.special.logit(x["prob"]))
model = lgb.train(params, d_train)
y_pred_raw = model.predict(x, raw_score=True)
In this part, we assume column "prob" from x to be our initial guess (perhaps from some other model). We apply the logit and use it as init_score. However, the model cannot improve on it and the boosting always returns 0: y_pred_raw = [0, 0, 0, 0, 0, 0]
y_pred_raw_with_init = scipy.special.logit(x["prob"]) + y_pred_raw
y_pred = scipy.special.expit(y_pred_raw_with_init)
The part above shows what I believe is the correct way to translate the initial scores plus the boosted raw scores back into probabilities. Since the boosting is zero, y_pred yields [0.3, ..., 0.3], which is just our initial probability.

Is it possible to get principal point from a projection matrix?

Is it possible to get principal point (cx, cy) from a 4x4 projection matrix? This is the same matrix asked in this question: Getting focal length and focal point from a projection matrix
(SCNMatrix4) s = (m11 = 1.83226573,     m12 = 0,             m13 = 0,           m14 = 0,
                  m21 = 0,              m22 = 2.44078445,    m23 = 0,           m24 = 0,
                  m31 = -0.00576340035, m32 = -0.0016724075, m33 = -1.00019991, m34 = -1,
                  m41 = 0,              m42 = 0,             m43 = -0.20002,    m44 = 0)
The values I'm trying to calculate from the 3x3 camera (intrinsics) matrix are x0 and y0.
I recently confronted this problem, and was quite astonished that I couldn't find a relevant solution on the Internet, because it seems to be a simple mathematics problem.
After a few days of struggling with matrices, I found a solution.
Let's define two Cartesian coordinate systems: the camera coordinate system with x', y', z' axes, and the world coordinate system with x, y, z axes. The camera (or the eye) is positioned at the origin of the camera coordinate system, and the image plane (the plane containing the screen) is z' = -n, where n is the focal length and the focal point is the position of the camera. I am using the convention of OpenGL, where n is the nearVal argument of glFrustum().
You can define a 4x4 transformation matrix M in a homogeneous coordinate system to deal with the projection. M transforms a coordinate (x, y, z) in the world coordinate system into a coordinate (x', y', z') in the camera coordinate system as follows, where # denotes matrix multiplication.
[
[x_prime_h],
[y_prime_h],
[z_prime_h],
[w_prime_h],
] = M # [
[x_h],
[y_h],
[z_h],
[w_h],
]
[x, y, z] = [x_h, y_h, z_h] / w_h
[x_prime, y_prime, z_prime] = [x_prime_h, y_prime_h, z_prime_h] / w_prime_h
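As a concrete illustration of that divide-by-w step, here is a small sketch using Ruby's Matrix library (the original answer does not prescribe any particular language):
require 'matrix'

# apply a 4x4 homogeneous transform m (the M above) to a world point [x, y, z]
def transform(m, point)
  ph = m * Vector[point[0], point[1], point[2], 1.0]   # [x'_h, y'_h, z'_h, w'_h]
  w = ph[3]
  Vector[ph[0] / w, ph[1] / w, ph[2] / w]              # divide by w'_h
end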
Now assume you are given M = P V, where P is a perspective projection matrix and V is a view transformation matrix. The theoretical projection matrix is like the following.
P_theoretical = [
[n, 0, 0, 0],
[0, n, 0, 0],
[0, 0, n, 0],
[0, 0, -1, 0],
]
In OpenGL, an augmented matrix like the following is used to cover the normalization and the nonlinear scaling of z coordinates, where l, r, b, t, n, f are the left, right, bottom, top, nearVal, farVal arguments of glFrustum(). (The resulting z' coordinate is not actually the coordinate of a projected point, but a value used for Z-buffering.)
P = [
[2*n/(r-l), 0, (r+l)/(r-l), 0],
[0, 2*n/(t-b), (t+b)/(t-b), 0],
[0, 0, -(f+n)/(f-n), -2*n*f/(f-n)],
[0, 0, -1, 0],
]
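If you want that matrix as code, a direct transcription, sketched with Ruby's Matrix library, could look like this:
require 'matrix'

# glFrustum-style projection matrix P from l, r, b, t, n, f
# (pass Floats to avoid Ruby's integer division)
def frustum(l, r, b, t, n, f)
  Matrix[[2 * n / (r - l), 0,               (r + l) / (r - l),  0],
         [0,               2 * n / (t - b), (t + b) / (t - b),  0],
         [0,               0,               -(f + n) / (f - n), -2 * n * f / (f - n)],
         [0,               0,               -1,                 0]]
end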
The transformation V is like the following, where r_ij is the element at the i-th row and j-th column of the 3x3 rotation matrix R and (c_0, c_1, c_2) is the position of the camera.
V = [
[r_00, r_01, r_02, -(r_00*c_0 + r_01*c_1 + r_02*c_2)],
[r_10, r_11, r_12, -(r_10*c_0 + r_11*c_1 + r_12*c_2)],
[r_20, r_21, r_22, -(r_20*c_0 + r_21*c_1 + r_22*c_2)],
[0, 0, 0, 1],
]
The P and V can be represented with block matrices like the following.
C = [
[c_0],
[c_1],
[c_2],
]
A = [
[2*n/(r-l), 0, (r+l)/(r-l)],
[0, 2*n/(t-b), (t+b)/(t-b)],
[0, 0, -(f+n)/(f-n)],
]
B = [
[0],
[0],
[-2*n*f/(f-n)],
]
P = [
[A,B],
[[0, 0, -1], [0]],
]
V = [
[R, -R # C],
[[0, 0, 0], [1]],
]
M = P # V = [
[A # R, -A # R # C + B],
[[0, 0, -1] # R, [0, 0, 1] # R # C],
]
Let m_ij be the element of M at the i-th row and j-th column. Taking the first entry of the second row of the block notation of M above, you can solve for the elementary z' vector of the camera coordinate system, which is the opposite of the direction from the camera to the intersection of the image plane with its normal line through the focal point. (That intersection point is the principal point.)
e_z_prime = [0, 0, 1] # R = -[m_30, m_31, m_32]
Taking the second column of the above block notation of M, you can solve for C like the following, where inv(X) is the inverse of a matrix X.
C = - inv([
[m_00, m_01, m_02],
[m_10, m_11, m_12],
[m_30, m_31, m_32],
]) # [
[m_03],
[m_13],
[m_33],
]
Let p_ij be the element of P at i-th row and j-th column.
Now you can solve for p_23 = -2nf/(f-n) like the following.
B = [
[m_03],
[m_13],
[m_23],
] + [
[m_00, m_01, m_02],
[m_10, m_11, m_12],
[m_20, m_21, m_22],
] # C
p_23 = B[2] = m_23 + (m_20*c_0 + m_21*c_1 + m_22*c_2)
Now using the fact p_20 = p_21 = 0, you can get p_22 = -(f+n)/(f-n) like the following.
p_22 * e_z_prime = [m_20, m_21, m_22]
p_22 = -(m_20*m_30 + m_21*m_31 + m_22*m_32)
Now you can get n and f from p_22 and p_23 like the following.
n = p_23/(p_22-1)
= -(m_23 + m_20*c_0+m_21*c_1+m_22*c_2) / (m_20*m_30+m_21*m_31+m_22*m_32 + 1)
f = p_23/(p_22+1)
= -(m_23 + m_20*c_0+m_21*c_1+m_22*c_2) / (m_20*m_30+m_21*m_31+m_22*m_32 - 1)
From the camera position C, the focal length n and the elementary z' vector e_z_prime, you can get the principal point, C - n * e_z_prime.
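Putting the steps together, here is a sketch of the whole recovery using Ruby's Matrix library; the 0-based m[i, j] indexing matches the m_ij convention above, and this is an illustration of the derivation rather than code from the original answer:
require 'matrix'

# m: 4x4 Matrix M = P V, indexed m[i, j] as in the derivation above
def principal_point(m)
  e_z_prime = Vector[m[3, 0], m[3, 1], m[3, 2]] * -1          # [0, 0, 1] # R
  sub = Matrix[[m[0, 0], m[0, 1], m[0, 2]],
               [m[1, 0], m[1, 1], m[1, 2]],
               [m[3, 0], m[3, 1], m[3, 2]]]
  c = (sub.inverse * Vector[m[0, 3], m[1, 3], m[3, 3]]) * -1  # camera position C
  p23 = m[2, 3] + m[2, 0] * c[0] + m[2, 1] * c[1] + m[2, 2] * c[2]
  p22 = -(m[2, 0] * m[3, 0] + m[2, 1] * m[3, 1] + m[2, 2] * m[3, 2])
  n = p23 / (p22 - 1.0)                                       # focal length (nearVal)
  c - e_z_prime * n                                           # principal point C - n * e_z'
end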
As a side note, you can prove that the input matrix of inv() in the formula for C is nonsingular. You can also find the elementary x' and y' vectors of the camera coordinate system, and from these find l, r, b, t. (There will be two valid solutions for the (e_x_prime, e_y_prime, l, r, b, t) tuple, due to symmetry.) Finally, this solution can be extended to the case where the transformation matrix is mixed with a world transformation that does anisotropic scaling, that is, when M = P V W and W can have unequal eigenvalues.

pure ruby: calculate sparse matrix rank fast(er)

How do I speed up the rank calculation of a sparse matrix in pure ruby?
I'm currently calculating the rank of a matrix (std lib) to determine the rigidity of a graph.
That means I have a sparse matrix of about 2 rows * 9 columns to about 300 rows * 300 columns.
That translates to times of several seconds to determine the rank of the matrix, which is very slow for a GUI application.
Because I use SketchUp I am bound to Ruby 2.0.0.
I'd like to avoid the hassle of setting up gcc on Windows, so nmatrix is (I think) not a good option.
Edit:
Example matrix:
[[12, -21, 0, -12, 21, 0, 0, 0, 0],
[12, -7, -20, 0, 0, 0, -12, 7, 20],
[0, 0, 0, 0, 14, -20, 0, -14, 20]]
Edit2:
I am using integers instead of floats to speed it up considerably.
I have also added a fail-fast mechanism earlier in the code so that the slow rank function is not called at all.
Edit3:
Part of the code
def rigid?(proto_matrix, nodes)
  matrix_base = Array.new(proto_matrix.size) { |index|
    # initialize the row with 0
    arr = Array.new(nodes.size * 3, 0.to_int)
    proto_row = proto_matrix[index]
    # ids of the nodes in the graph
    node_ids = proto_row.map { |hash| hash[:id] }
    # set the values of both of the nodes' positions
    [0, 1].each { |i|
      vertex_index = vertices.find_index(node_ids[i])
      # predetermined vector associated to the node
      vec = proto_row[i][:vec]
      arr[vertex_index * 3]     = vec.x.to_int
      arr[vertex_index * 3 + 1] = vec.y.to_int
      arr[vertex_index * 3 + 2] = vec.z.to_int
    }
    arr
  }
  matrix = Matrix::rows(matrix_base, false)
  rank = matrix.rank
  # graph is rigid if the rank of the matrix is bigger or equal
  # to the amount of node coordinates minus the degrees of freedom
  # of the whole graph
  rank >= nodes.size * 3 - 6
end
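For what it's worth, one direction that is sometimes much faster than Matrix#rank (which uses exact arithmetic) is a plain floating-point Gaussian elimination with a pivot tolerance. This is only a sketch, not part of the original post, and the tolerance can misjudge nearly dependent rows:
require 'matrix'

# Rank via floating-point Gaussian elimination with partial pivoting.
# rows: array of arrays of numbers; tol: pivots below this count as zero.
def float_rank(rows, tol = 1e-10)
  a = rows.map { |r| r.map(&:to_f) }
  n_rows = a.size
  n_cols = a.first.size
  rank = 0
  col = 0
  while rank < n_rows && col < n_cols
    # find the largest pivot in this column at or below the current row
    pivot_row = (rank...n_rows).max_by { |r| a[r][col].abs }
    if a[pivot_row][col].abs <= tol
      col += 1
      next
    end
    a[rank], a[pivot_row] = a[pivot_row], a[rank]
    # eliminate the entries below the pivot
    (rank + 1...n_rows).each do |r|
      factor = a[r][col] / a[rank][col]
      (col...n_cols).each { |c| a[r][c] -= factor * a[rank][c] }
    end
    rank += 1
    col += 1
  end
  rank
end

p float_rank([[12, -21, 0, -12, 21, 0, 0, 0, 0],
              [12, -7, -20, 0, 0, 0, -12, 7, 20],
              [0, 0, 0, 0, 14, -20, 0, -14, 20]])   # => 3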

Failing to solve a simple least squares fit with Ruby GSL

I have the following Ruby script, running with rb-gsl (1.16.0.6) under ruby-2.2.1:
require("gsl")
include GSL
m = GSL::Matrix::alloc([0.18, 0.60, 0.57], [0.24, 0.99, 0.58],
[0.14, 0.30, 0.97], [0.51, 0.19, 0.85], [0.34, 0.91, 0.18])
B = GSL::Vector[1, 2, 3, 4, 5]
qr, tau = m.QR_decomp
x, res = qr.QR_lssolve(tau,B)
The resulting error is:
testls.rb:9:in QR_lssolve: Ruby/GSL error code 19, matrix size must match solution size (file qr.c, line 193), matrix/vector sizes are not conformant
(GSL::ERROR::EBADLEN)
from testls.rb:9:in
I think my matrices have the right dimensions for an over-determined LS problem, so I can't understand the error message.
In Matlab, I can write:
m=[[0.18, 0.60, 0.57]; [0.24, 0.99, 0.58];...
[0.14, 0.30, 0.97]; [0.51, 0.19, 0.85]; [0.34, 0.91, 0.18]];
B=[1, 2, 3, 4, 5]';
x=m\B
and get
x =
8.0683
0.8844
0.2319
I would like to make the matrix/vector sizes conformant and still express an over-determined problem. It would seem from the GSL documentation that
gsl_linalg_QR_lssolve (const gsl_matrix * QR, const gsl_vector * tau, const
gsl_vector * b, gsl_vector * x, gsl_vector * residual)
is well suited to the task at hand, so is it the Ruby binding that is broken, or my understanding of the correct usage? All help will be appreciated.
Answering my own question: it seems you have to preallocate vectors for the solution x and the residual, and pass them as arguments:
require("gsl")
include GSL
m = GSL::Matrix::alloc([0.18, 0.60, 0.57], [0.24, 0.99, 0.58],
[0.14, 0.30, 0.97], [0.51, 0.19, 0.85], [0.34, 0.91, 0.18])
B = GSL::Vector[1, 2, 3, 4, 5]
qr, tau = m.QR_decomp
x = GSL::Vector.alloc(3)
r = GSL::Vector.alloc(5)
qr.QR_lssolve(tau,B,x,r)
p x
This does indeed yield
GSL::Vector
[ 8.068e+00 8.844e-01 2.319e-01 ]

Matrix derivative doesn't get evaluated

I'm trying to evaluate the partial derivative of the most general 3D rotation matrix, like this:
import sympy
from sympy import cos, sin, diff

phi, psi, theta = sympy.symbols("phi, psi, theta")
RMatrixPhi = sympy.Matrix([[cos(phi), sin(phi), 0],
                           [-sin(phi), cos(phi), 0],
                           [0, 0, 1]])
RMatrixPsi = sympy.Matrix([[cos(psi), 0, sin(psi)],
                           [0, 1, 0],
                           [-sin(psi), 0, cos(psi)]])
RMatrixTheta = sympy.Matrix([[1, 0, 0],
                             [0, cos(theta), sin(theta)],
                             [0, -sin(theta), cos(theta)]])
RMatrix = RMatrixPhi * RMatrixPsi * RMatrixTheta
D = diff(RMatrix, phi)
However, D is then a sympy.Derivative object, and I cannot get it evaluated;
it's just printed out as Derivative(Matrix(...)).
The only way I could get it working is by writing
sympy.Matrix([sympy.diff(r, phi) for r in RMatrix]).reshape(3,3)
but that looks ugly. What's the right way to compute such derivatives?
The Matrix class has a method called diff which, according to the documentation ...
Docstring:
Calculate the derivative of each element in the matrix.
So use
RMatrix.diff(phi)
to perform element-wise differentiation.
