Ruby Matrix::EigenvalueDecomposition error

I looked at the eigenvector matrix of a given matrix, but when I try to invert it I get an error from eigenvector_matrix_inv().
require 'matrix'
m = Matrix[ [0.5703125, 1.8369140625, 0.0, 0.0],
[-0.6875, -0.4609375, 0.0, 0.0],
[0.0, 0.0, -2.1796875, 8.7119140625],
[0.0, 0.0, -0.6875, 2.2890625] ]
meigen = m.eigen.eigenvector_matrix
meiveni = m.eigen.eigenvector_matrix_inv
# .../matrix.rb:930:in `block in inverse_from': Not Regular Matrix (ExceptionForMatrix::ErrNotRegular)
It should not be singular, as checked with Mathematica:
mruby = {{0.5703125, 1.8369140625, 0.0, 0.0}, {-0.6875, -0.4609375,
0.0, 0.0}, {0.0, 0.0, -2.1796875, 8.7119140625}, {0.0,
0.0, -0.6875, 2.2890625}};
Inverse[Eigenvectors[mruby]]
giving
{{0.586146 - 0.302685 I, 0.586146 + 0.302685 I, 0. + 0. I,
0. + 0. I}, {0. - 1.07831 I, 0. + 1.07831 I, 0. + 0. I,
0. + 0. I}, {0. + 0. I, 0. + 0. I, 0.519354 + 1.16217 I,
0.519354 - 1.16217 I}, {0. + 0. I, 0. + 0. I, 0. - 4.53135 I,
0. + 4.53135 I}}
What am I doing wrong?
Is there something particular I should take special care of in Ruby?

You don't invert a matrix when you do eigenvalue problems. There are lots of algorithms, but inversion isn't one of them.
Your matrix is a bit odd: you've got two positive and two negative diagonal elements. The fact that the eigenvectors have complex entries suggests this isn't the usual situation of real eigenvalues with real eigenvectors.
Either your matrix is incorrect or you've chosen the wrong algorithm. See if a Hessian matrix is what you have and look at appropriate algorithms.

Related

Compute the (Aᵀ×A)·(Bᵀ×B) matrix using GSL, BLAS and LAPACK

Say I have 2 symmetric matrices:
A = {{1,2}, {2,3}}
B = {{2,3},{3,4}}
Can I compute the matrix (Aᵀ×A)·(Bᵀ×B) using GSL, BLAS and LAPACK?
I'm using
gsl_blas_dsyrk(CblasUpper, CblasTrans, 1.0, A, 0.0, ATA);
gsl_blas_dsyrk(CblasUpper, CblasTrans, 1.0, B, 0.0, BTB);
gsl_blas_dsymm(CblasLeft, CblasUpper, 1.0, ATA, BTB, 0.0, ATABTB); // It doesn't work
It returns:
(Aᵀ·A) = ATA = {{5, 8}, {0, 13}} -- ok, gsl_blas_dsyrk returns the symmetric matrix as an upper triangular matrix.
(Bᵀ·B) = BTB = {{13, 18}, {0, 25}} -- ok.
(Aᵀ·A)·(Bᵀ·B) = ATABTB = {{65, 290}, {104, 469}} -- it's wrong.
Symmetrize BTB and the problem will be solved.
As you noticed, dsyrk() computes only the upper triangular part of each symmetric matrix. Then dsymm() is applied. According to the definition of dsymm(), since the flag CblasLeft is used, the following operation is performed:
C := alpha*A*B + beta*C
where alpha and beta are scalars, A is a symmetric matrix and B and
C are m by n matrices.
Indeed, the B matrix is a general matrix, not necessarily a symmetric one. As a result, ATA is multiplied by only the upper triangular part of BTB, since the lower triangular part of BTB was never computed.
To symmetrize BTB, a pair of for loops is a straightforward solution; see Convert symmetric matrix between packed and full storage?
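Independent of GSL, a quick numpy sketch (using the question's example matrices) shows what the full product should be, and how leaving BTB upper triangular reproduces the wrong numbers above:
import numpy as np

A = np.array([[1., 2.], [2., 3.]])
B = np.array([[2., 3.], [3., 4.]])

ATA = A.T @ A              # [[ 5.,  8.], [ 8., 13.]]
BTB = B.T @ B              # [[13., 18.], [18., 25.]]
print(ATA @ BTB)           # [[209., 290.], [338., 469.]] -- the expected result
print(ATA @ np.triu(BTB))  # [[ 65., 290.], [104., 469.]] -- what you get with only the upper triangle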

Finding an optimal selection in a 2D matrix with given constraints

Problem statement
Given an m x n matrix where m <= n, you have to select entries so that their sum is maximal.
However, you can only select one entry per row and at most one per column.
Performance is also a huge factor, which means it's OK to find selections that are not optimal in order to reduce complexity (as long as they're better than selecting random entries).
Example (shown as images in the original post)
Valid selections: one entry per row and at most one per column.
Invalid selections: selections that violate this constraint.
My Approaches
Select best of k random permutations
A = createRandomMatrix(m, n)
selections = list()
for trial in range(k):
    cols = createRandomIndexPermutation(m, n)  # m distinct column indices out of n, no duplicates
    total = 0
    for row in range(m):
        total += A[row, cols[row]]
    selections.append(total)
result = max(selections)
This approach performs poorly when n is significantly larger than m.
Best possible (not yet taken) column per row
A = createRandomMatrix(m, n)
takenCols = set()
result = 0
for row in range(m):
    col = getMaxColPossible(row, takenCols, A)  # best-valued column of this row that is not yet taken
    result += A[row, col]
    takenCols.add(col)
This approach always favors the rows (or columns) that are processed first, which could lead to worse-than-average results.
This sounds exactly like the rectangular linear assignment problem (RLAP). This problem can be solved efficiently (roughly cubic time in terms of asymptotic complexity) to a global optimum, and a lot of software is available.
The basic approaches are LAP + dummy variables, LAP modifications, or more general algorithms like network flows (min-cost max-flow).
You can start with (pdf):
Bijsterbosch, J., and A. Volgenant. "Solving the Rectangular assignment problem and applications." Annals of Operations Research 181.1 (2010): 443-462.
Small Python example using Python's common scientific stack:
Edit: as mentioned in the comments, negating the cost matrix (which I did, motivated by the LP description) is not what's done in the Munkres/Hungarian-method literature. The strategy is to build a profit matrix from the cost matrix, which is now reflected in the example. This approach leads to a non-negative cost matrix (sometimes assumed; whether it matters depends on the implementation). More information is available in this question.
Code
import numpy as np
import scipy.optimize as sopt # RLAP solver
import matplotlib.pyplot as plt # visualizatiion
import seaborn as sns # """
np.random.seed(1)
# Example data from
# https://matplotlib.org/gallery/images_contours_and_fields/image_annotated_heatmap.html
# removed a row; will be shuffled to make it more interesting!
harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
[2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
[1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
[0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
[0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
[1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1]],)
harvest = harvest[:, np.random.permutation(harvest.shape[1])]
# scipy: linear_sum_assignment -> able to take rectangular-problem!
# assumption: minimize -> cost-matrix to profit-matrix:
# remove original cost from maximum-costs
# Kuhn, Harold W.:
# "Variants of the Hungarian method for assignment problems."
max_cost = np.amax(harvest)
harvest_profit = max_cost - harvest
row_ind, col_ind = sopt.linear_sum_assignment(harvest_profit)
sol_map = np.zeros(harvest.shape, dtype=bool)
sol_map[row_ind, col_ind] = True
# Visualize
f, ax = plt.subplots(2, figsize=(9, 6))
sns.heatmap(harvest, annot=True, linewidths=.5, ax=ax[0], cbar=False,
linecolor='black', cmap="YlGnBu")
sns.heatmap(harvest, annot=True, mask=~sol_map, linewidths=.5, ax=ax[1],
linecolor='black', cbar=False, cmap="YlGnBu")
plt.tight_layout()
plt.show()
Output: two heatmaps, the full matrix and the matrix masked to the selected entries.

Algorithm for calculating texture tile coordinates in a 3D model generated from a height map

I'm converting a height map into a 3D model. I have a small texture that is meant to cover 4 points (i.e. 2 triangles). The problem is texture stretching when the height difference between points is too large. I'd like to tile my texture in order to avoid heavy stretching, but I have trouble implementing a general-case algorithm. I have rendered pictures to show what I mean:
In these examples I use only half of the texture (1 triangle); all texture coordinates are set manually.
Please help me find an algorithm for calculating texture coordinates in the general case.
code:
class procedure GraphicEngine.AddHeightmapQuad(X, Y, Z1, Z2, Z3, Z4: Double);
const
Length = 1;
var
Vertices: PVertex;
v1, v2, v3, v4: TD3DVector;
begin
OleCheck(Triangles.Lock(TrianglesCount * 3 * SizeOf(TVertex), 2 * 3 * SizeOf(TVertex), Pointer(Vertices), 0));
Dec(Vertices);
Vertices[1]:= TVertex.Create(X , Y , Z1, 0, 0);
Vertices[2]:= TVertex.Create(X + Length, Y , Z2, 1, 0);
Vertices[3]:= TVertex.Create(X , Y + Length, Z3, 0, 1);
Vertices[4]:= TVertex.Create(X + Length, Y , Z2, 1, 0);
Vertices[5]:= TVertex.Create(X , Y + Length, Z3, 0, 1);
Vertices[6]:= TVertex.Create(X + Length, Y + Length, Z4, 1, 1);
D3DXVec3Subtract(v1, Vertices[2].vec, Vertices[1].vec);
D3DXVec3Subtract(v2, Vertices[3].vec, Vertices[1].vec);
D3DXVec3Subtract(v3, Vertices[4].vec, Vertices[6].vec);
D3DXVec3Subtract(v4, Vertices[5].vec, Vertices[6].vec);
Vertices[2].U:= Sqrt(1 + Sqr(v1.z));
Vertices[3].V:= Sqrt(1 + Sqr(v2.z));
Vertices[4].U:= Sqrt(1 + Sqr(v4.z));
Vertices[5].V:= Sqrt(1 + Sqr(v3.z));
Vertices[6].U:= Sqrt(1 + Sqr(v4.z));
Vertices[6].V:= Sqrt(1 + Sqr(v3.z));
Inc(TrianglesCount, 2);
OleCheck(Triangles.Unlock);
end;
You can compute the exact stretch of the texture by calculating the actual size of the triangle. Even more simply, you can compute the lengths of the edges and adjust the coordinates accordingly. Fix one point at (0, 0) in the texture; then the next one, instead of (0, 1), becomes (0, sqrt(1 + (h1 - h2)^2)), and the same for the other point of the grid (h1 and h2 are the heights of the two points). To avoid mismatches, make sure you copy the coordinates when you do the next cell.
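A rough Python sketch of this per-cell computation (assuming a unit cell and corner heights h00, h10, h01, h11; the rule for the fourth corner mirrors the edge lengths used in the question's code):
import math

def cell_uvs(h00, h10, h01, h11):
    # heights at corners (0,0), (1,0), (0,1), (1,1) of a unit grid cell
    u_bottom = math.sqrt(1 + (h10 - h00) ** 2)  # true length of the bottom edge
    v_left   = math.sqrt(1 + (h01 - h00) ** 2)  # true length of the left edge
    u_top    = math.sqrt(1 + (h11 - h01) ** 2)  # true length of the top edge
    v_right  = math.sqrt(1 + (h11 - h10) ** 2)  # true length of the right edge
    return {(0, 0): (0.0, 0.0),
            (1, 0): (u_bottom, 0.0),
            (0, 1): (0.0, v_left),
            (1, 1): (u_top, v_right)}  # reuse these values for the neighbouring cells to avoid mismatches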

Choose item from a list with bias?

Given a list of items x1 ... xn and associated probabilities p1 ... pn that sum to 1, there's a well-known procedure to select a random item with its associated probability: sort the list by weight, choose a random value between 0 and 1, and accumulate a cumulative sum until it exceeds the chosen value, then return the item at that point.
So if we have x1 -> 0.5, x2 -> 0.3, x3 -> 0.2: if the randomly chosen value is less than 0.5, x1 will be chosen; if between 0.5 and 0.8, x2; otherwise x3.
This requires sorting, so it needs O(n log n) time. Is there anything more efficient than that?
I don't think you would actually need to sort the list for the algorithm to work.
x1 = 0.2, x2 = 0.7, x3 = 0.1
If you sort then you have:
x3: 0.0 to 0.1 = 10%
x1: 0.1 to 0.3 = 20%
x2: 0.3 to 1.0 = 70%
If you don't, you instead get:
x1: 0.0 to 0.2 = 20%
x2: 0.2 to 0.9 = 70%
x3: 0.9 to 1.0 = 10%
Just eliminating the sort and iterating through would make it O(n).
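A minimal Python sketch of the unsorted, O(n) cumulative-sum selection (the items and weights are just an example):
import random

def weighted_choice(items, probs):
    # probs are the associated probabilities and sum to 1; no sorting required
    r = random.random()
    cumulative = 0.0
    for item, p in zip(items, probs):
        cumulative += p
        if r < cumulative:
            return item
    return items[-1]  # guard against floating-point round-off

print(weighted_choice(['x1', 'x2', 'x3'], [0.2, 0.7, 0.1]))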

3D Least Squares Plane

What's the algorithm for computing a least squares plane in (x, y, z) space, given a set of 3D data points? In other words, if I had a bunch of points like (1, 2, 3), (4, 5, 6), (7, 8, 9), etc., how would one go about calculating the best fit plane f(x, y) = ax + by + c? What's the algorithm for getting a, b, and c out of a set of 3D points?
If you have n data points (x[i], y[i], z[i]), compute the 3x3 symmetric matrix A whose entries are:
sum_i x[i]*x[i], sum_i x[i]*y[i], sum_i x[i]
sum_i x[i]*y[i], sum_i y[i]*y[i], sum_i y[i]
sum_i x[i], sum_i y[i], n
Also compute the 3 element vector b:
{sum_i x[i]*z[i], sum_i y[i]*z[i], sum_i z[i]}
Then solve Ax = b for the given A and b. The three components of the solution vector are the coefficients to the least-square fit plane {a,b,c}.
Note that this is the "ordinary least squares" fit, which is appropriate only when z is expected to be a linear function of x and y. If you are looking more generally for a "best fit plane" in 3-space, you may want to learn about "geometric" least squares.
Note also that this will fail if your points are in a line, as your example points are.
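A minimal numpy sketch of this recipe (the four sample points are made up and chosen not to be collinear):
import numpy as np

pts = np.array([[1.0, 2.0, 3.1], [4.0, 5.0, 5.9], [7.0, 8.0, 9.2], [2.0, 1.0, 2.8]])
x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
n = len(pts)

A = np.array([[np.sum(x*x), np.sum(x*y), np.sum(x)],
              [np.sum(x*y), np.sum(y*y), np.sum(y)],
              [np.sum(x),   np.sum(y),   n]])
b = np.array([np.sum(x*z), np.sum(y*z), np.sum(z)])

a_fit, b_fit, c_fit = np.linalg.solve(A, b)  # plane z = a*x + b*y + c
print(a_fit, b_fit, c_fit)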
The equation for a plane is: ax + by + c = z. So set up matrices like this with all your data:
        | x_0  y_0  1 |
    A = | x_1  y_1  1 |
        |     ...     |
        | x_n  y_n  1 |
And
        | a |
    x = | b |
        | c |
And
        | z_0 |
    B = | z_1 |
        | ... |
        | z_n |
In other words: Ax = B. Now solve for x, which contains your coefficients. But since (I assume) you have more than 3 points, the system is over-determined, so you need to use the left pseudoinverse. So the answer is:
| a |
| b | = (A^T A)^-1 A^T B
| c |
And here is some simple Python code with an example:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
N_POINTS = 10
TARGET_X_SLOPE = 2
TARGET_y_SLOPE = 3
TARGET_OFFSET = 5
EXTENTS = 5
NOISE = 5
# create random data
xs = [np.random.uniform(-EXTENTS, EXTENTS) for i in range(N_POINTS)]
ys = [np.random.uniform(-EXTENTS, EXTENTS) for i in range(N_POINTS)]
zs = []
for i in range(N_POINTS):
zs.append(xs[i]*TARGET_X_SLOPE + \
ys[i]*TARGET_y_SLOPE + \
TARGET_OFFSET + np.random.normal(scale=NOISE))
# plot raw data
plt.figure()
ax = plt.subplot(111, projection='3d')
ax.scatter(xs, ys, zs, color='b')
# do fit
tmp_A = []
tmp_b = []
for i in range(len(xs)):
tmp_A.append([xs[i], ys[i], 1])
tmp_b.append(zs[i])
b = np.matrix(tmp_b).T
A = np.matrix(tmp_A)
fit = (A.T * A).I * A.T * b
errors = b - A * fit
residual = np.linalg.norm(errors)
print("solution:")
print("%f x + %f y + %f = z" % (fit[0], fit[1], fit[2]))
print("errors:")
print(errors)
print("residual:")
print(residual)
# plot plane
xlim = ax.get_xlim()
ylim = ax.get_ylim()
X,Y = np.meshgrid(np.arange(xlim[0], xlim[1]),
np.arange(ylim[0], ylim[1]))
Z = np.zeros(X.shape)
for r in range(X.shape[0]):
for c in range(X.shape[1]):
Z[r,c] = fit[0] * X[r,c] + fit[1] * Y[r,c] + fit[2]
ax.plot_wireframe(X,Y,Z, color='k')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()
Unless someone tells me how to type equations here, let me just write down the final computations you have to do:
First, given points r_i \in \R^3, i = 1..N, calculate the center of mass of all points:
r_G = \frac{\sum_{i=1}^N r_i}{N}
Then calculate the normal vector n, which together with the base vector r_G defines the plane, by computing the 3x3 matrix A as
A = \sum_{i=1}^N (r_i - r_G)(r_i - r_G)^T
With this matrix, the normal vector n is now given by the eigenvector of A corresponding to the minimal eigenvalue of A.
To find the eigenvector/eigenvalue pairs, use any linear algebra library of your choice.
This solution is based on the Rayleigh-Ritz theorem for the Hermitian matrix A.
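A minimal numpy sketch of this computation (the sample points are made up):
import numpy as np

pts = np.array([[1.0, 2.0, 3.1], [4.0, 5.0, 5.9], [7.0, 8.0, 9.2], [2.0, 1.0, 2.8]])

r_G = pts.mean(axis=0)                # center of mass of all points
diffs = pts - r_G
A = diffs.T @ diffs                   # 3x3 matrix sum (r_i - r_G)(r_i - r_G)^T
eigvals, eigvecs = np.linalg.eigh(A)  # eigh returns ascending eigenvalues for a symmetric matrix
normal = eigvecs[:, 0]                # eigenvector of the smallest eigenvalue = plane normal
print("point on plane:", r_G, "normal:", normal)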
See 'Least Squares Fitting of Data' by David Eberly for how I came up with this one to minimize the geometric fit (orthogonal distance from points to the plane).
bool Geom_utils::Fit_plane_direct(const arma::mat& pts_in, Plane& plane_out)
{
bool success(false);
int K(pts_in.n_cols);
if(pts_in.n_rows == 3 && K > 2) // check for bad sizing and indeterminate case
{
plane_out._p_3 = (1.0/static_cast<double>(K))*arma::sum(pts_in,1);
arma::mat A(pts_in);
A.each_col() -= plane_out._p_3; //[x1-p, x2-p, ..., xk-p]
arma::mat33 M(A*A.t());
arma::vec3 D;
arma::mat33 V;
if(arma::eig_sym(D,V,M))
{
// diagonalization succeeded
plane_out._n_3 = V.col(0); // in ascending order by default
if(plane_out._n_3(2) < 0)
{
plane_out._n_3 = -plane_out._n_3; // upward pointing
}
success = true;
}
}
return success;
}
Timed at 37 microseconds fitting a plane to 1000 points (Windows 7, i7, 32-bit program).
This reduces to the Total Least Squares problem, which can be solved using an SVD.
C++ code using OpenCV:
float fitPlaneToSetOfPoints(const std::vector<cv::Point3f> &pts, cv::Point3f &p0, cv::Vec3f &nml) {
const int SCALAR_TYPE = CV_32F;
typedef float ScalarType;
// Calculate centroid
p0 = cv::Point3f(0,0,0);
for (int i = 0; i < pts.size(); ++i)
p0 = p0 + conv<cv::Vec3f>(pts[i]); // conv<> is presumably a small user helper converting cv::Point3f to cv::Vec3f
p0 *= 1.0/pts.size();
// Compose data matrix subtracting the centroid from each point
cv::Mat Q(pts.size(), 3, SCALAR_TYPE);
for (int i = 0; i < pts.size(); ++i) {
Q.at<ScalarType>(i,0) = pts[i].x - p0.x;
Q.at<ScalarType>(i,1) = pts[i].y - p0.y;
Q.at<ScalarType>(i,2) = pts[i].z - p0.z;
}
// Compute SVD decomposition and the Total Least Squares solution, which is the eigenvector corresponding to the least eigenvalue
cv::SVD svd(Q, cv::SVD::MODIFY_A|cv::SVD::FULL_UV);
nml = svd.vt.row(2);
// Calculate the actual RMS error
float err = 0;
for (int i = 0; i < pts.size(); ++i)
err += powf(nml.dot(pts[i] - p0), 2);
err = sqrtf(err / pts.size());
return err;
}
As with any least-squares approach, you proceed like this:
Before you start coding
1. Write down an equation for a plane in some parameterization, say 0 = ax + by + z + d in the three parameters (a, b, d).
2. Find an expression D(\vec{v}; a, b, d) for the distance from an arbitrary point \vec{v} to the plane.
3. Write down the sum S = \sum_{i=0}^{n} D^2(\vec{x}_i), and simplify until it is expressed in terms of simple sums of the components of the points like \sum v_x, \sum v_y^2, \sum v_x v_z ...
4. Write down the per-parameter minimization expressions dS/da = 0, dS/db = 0, dS/dd = 0, which gives you a set of three equations in the three parameters and the sums from the previous step.
5. Solve this set of equations for the parameters.
(or for simple cases, just look up the form). Using a symbolic algebra package (like Mathematica) can make your life much easier.
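To make steps 3 and 4 concrete for the simpler vertical-offset fit z = ax + by + c (the variant solved in the first answer above; the geometric-distance parameterization follows the same pattern but is messier):
S(a, b, c) = \sum_i (a x_i + b y_i + c - z_i)^2
\partial S / \partial a = 2 \sum_i x_i (a x_i + b y_i + c - z_i) = 0
\partial S / \partial b = 2 \sum_i y_i (a x_i + b y_i + c - z_i) = 0
\partial S / \partial c = 2 \sum_i (a x_i + b y_i + c - z_i) = 0
which rearrange to exactly the 3x3 system of sums given in the first answer.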
The coding
Write code to form the needed sums and find the parameters from the last set above.
Alternatives
Note that if you actually had only three points, you'd be better just finding the plane that goes through them.
Also, if the analytic solution is infeasible (not the case for a plane, but possible in general) you can do steps 1 and 2, and use a Monte Carlo minimizer on the sum in step 3.
CGAL::linear_least_squares_fitting_3
Function linear_least_squares_fitting_3 computes the best fitting 3D
line or plane (in the least squares sense) of a set of 3D objects such
as points, segments, triangles, spheres, balls, cuboids or tetrahedra.
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/Principal_component_analysis_ref/Function_linear_least_squares_fitting_3.html
It sounds like all you want to do is linear regression with 2 regressors. The Wikipedia page on the subject should tell you all you need to know and then some.
All you'll have to do is to solve the system of equations.
If those are your points:
(1, 2, 3), (4, 5, 6), (7, 8, 9)
That gives you the equations:
3=a*1 + b*2 + c
6=a*4 + b*5 + c
9=a*7 + b*8 + c
So your question actually should be: How do I solve a system of equations?
Therefore I recommend reading this SO question.
If I've misunderstood your question let us know.
EDIT:
Ignore my answer as you probably meant something else.
We first present a linear least-squares plane fitting method that minimizes the residuals between the fitted plane (defined by the estimated normal vector) and the provided points.
Recall that the equation for a plane passing through origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (that may or may not pass through origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is away from the origin, along the direction of the normal vector of the plane. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only provided point coordinates. Divide both sides by D and rearrange this term to the right-hand side. This leads to A/D x + B/D y + C/D z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver in C++ as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // row: 5 points; column: xyz coordinates (fill with the point coordinates, e.g. as in the test case below)
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();
// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);
// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // compare Ax + By + Cz + D with 0.2 (ideally Ax + By + Cz + D = 0)
if ( fabs( normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
planeValid = false; // 0.2 is an experimental threshold; can be tuned
break;
}
}
We then discuss its equivalence to the typical SVD-based method and compare the two.
The aforementioned linear least-squares (LLS) method fits the general plane equation Ax + By + Cz + D = 0, whereas the SVD-based method replaces D with D = - (Ax0 + By0 + Cz0) and fits the plane equation A(x-x0) + B(y-y0) + C(z-z0) = 0, where (x0, y0, z0) is the mean of all points that serves as the origin of the new local coordinate frame.
Comparison between the two methods:
The LLS fitting method is much faster than the SVD-based method, and is suitable for use when points are known to be roughly in a plane shape.
The SVD-based method is more numerically stable when the plane is far away from origin, because the LLS method would require more digits after decimal to be stored and processed in such cases.
The LLS method can detect outliers by checking the dot product residual between each point and the estimated normal vector, whereas the SVD-based method can detect outliers by checking if the smallest eigenvalue of the covariance matrix is significantly smaller than the two larger eigenvalues (i.e. checking the shape of the covariance matrix).
We finally provide a test case in C++ and MATLAB.
// Test case in C++ (using LLS fitting method)
matA(0,0) = 5.4637; matA(0,1) = 10.3354; matA(0,2) = 2.7203;
matA(1,0) = 5.8038; matA(1,1) = 10.2393; matA(1,2) = 2.7354;
matA(2,0) = 5.8565; matA(2,1) = 10.2520; matA(2,2) = 2.3138;
matA(3,0) = 6.0405; matA(3,1) = 10.1836; matA(3,2) = 2.3218;
matA(4,0) = 5.5537; matA(4,1) = 10.3349; matA(4,2) = 1.8796;
// With this sample data, LLS fitting method can produce the following result
// fitted normal vector = (-0.0231143, -0.0838307, -0.00266429)
// unit normal vector = (-0.265682, -0.963574, -0.0306241)
// D = 11.4943
% Test case in MATLAB (using SVD-based method)
points = [5.4637 10.3354 2.7203;
5.8038 10.2393 2.7354;
5.8565 10.2520 2.3138;
6.0405 10.1836 2.3218;
5.5537 10.3349 1.8796]
covariance = cov(points)
[V, D] = eig(covariance)
normal = V(:, 1) % pick the eigenvector that corresponds to the smallest eigenvalue
% normal = (0.2655, 0.9636, 0.0306)

Resources