How to call dot for Eigen::Array

I have two matrices and would like to treat them as a 1-D list and compute a dot product. I tried the following, but it is not working:
Eigen::MatrixXf a(9,9), b(9,9);
float r = a.array().dot(b.array());
What would be the best way to do it?

Computing the coefficient-wise product of 2 matrices is a common pattern, so Eigen provides the cwiseProduct() method to write it elegantly. This would lead to the following expression:
float r = a.cwiseProduct(b).sum();
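For illustration, here is a minimal, self-contained sketch (the random test matrices are my own) showing that this expression agrees with the dot product of the flattened matrices:

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXf a = Eigen::MatrixXf::Random(9, 9);
    Eigen::MatrixXf b = Eigen::MatrixXf::Random(9, 9);

    // Coefficient-wise product, then sum over all entries.
    float r1 = a.cwiseProduct(b).sum();

    // The same value via flattening both matrices into vectors.
    float r2 = Eigen::Map<Eigen::VectorXf>(a.data(), a.size())
                   .dot(Eigen::Map<Eigen::VectorXf>(b.data(), b.size()));

    std::cout << r1 << " == " << r2 << std::endl;  // equal up to rounding
}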

Try this. :)
Eigen::MatrixXf a(9, 9), b(9, 9);
Eigen::Map<Eigen::VectorXf> aVector(a.data(), 81);
Eigen::Map<Eigen::VectorXf> bVector(b.data(), 81);
float squareError = aVector.dot(bVector);
See Eigen's documentation on the Map class for details.

Actually, I figured it out:
float r = (a.array()*b.array()).sum();

Related

How do I calculate inner product of two vectors in nalgebra?

From the following
let v = OVector::<f64, U2>::from_column_slice(&[3_f64, 4_f64]);
let x = &v.transpose() * &v; // get the inner product, i.e. <v,v>
I expected x to be an f64 scalar, i.e. x = 25.0.
But actually, I can only obtain x as OMatrix::<f64, Const<1>, Const<1>>.
The situation gets even worse in matrix product operations. For example, the following code doesn't compile, since v^T v is not a scalar:
let m = OMatrix::<f64, U2, U2>::from_element(1.0);
let v = OVector::<f64, U2>::from_column_slice(&[3_f64, 4_f64]);
// not working
let y = &v.transpose() * &v * m; // types conflict
// working
let y = 25.0 * m; // expected to behave like this
What is the correct way to do this?
In mathematics, you would usually identify 1x1 matrices with scalars (they are equivalent, for a suitable notion of equivalence). Under that identification, the dot product of two vectors is exactly the matrix dot product of the corresponding column matrices.
However, that is not the case here: Rust has to know the exact type of the data. So, since you are working with matrices to start with, I would suggest using the actual matrix expression and extracting the scalar explicitly: it's simply (v.transpose() * v).trace(). This is the more general matrix dot product, and taking the trace exactly "extracts" the scalar from the 1x1 matrix.
Otherwise, this operation is already defined as the dot product (unsurprisingly): v.dot(&v).
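A minimal sketch of both approaches, assuming nalgebra's fixed-size aliases Matrix2 and Vector2 (equivalent to the OMatrix/OVector types above):

use nalgebra::{Matrix2, Vector2};

fn main() {
    let m = Matrix2::<f64>::from_element(1.0);
    let v = Vector2::new(3.0_f64, 4.0);

    // trace() extracts the scalar from the 1x1 matrix v^T v.
    let x = (v.transpose() * v).trace();
    assert_eq!(x, 25.0);

    // Equivalent, using the built-in dot product.
    assert_eq!(v.dot(&v), 25.0);

    // The scalar now multiplies the matrix as expected.
    let y = x * m;
    println!("{}", y);
}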

sympy nsolve with MatrixSymbol

I'd like to numerically solve an equation involving a MatrixSymbol. Here's a basic example:
import sympy as sy
v = sy.MatrixSymbol('v', 2, 1)
equation = (v - sy.Matrix([17, 23])).as_explicit()
I'd like something like:
sy.nsolve(equation, v, sy.Matrix([0,0]))
But because nsolve does not accept MatrixSymbols, I've made a kludgy workaround that gives the correct output of Matrix([[17.0], [23.0]]):
vx, vy = sy.symbols('v_x v_y')
sy.nsolve(equation.subs(v, sy.Matrix([vx, vy])), [vx, vy], [0,0])
Essentially, I've converted a MatrixSymbol to a matrix of Symbols to make nsolve happy.
Is there a better way I should be doing this?
Edit: the workaround can be simplified to:
vseq = sy.symbols('a b')  # names must be distinct
sy.nsolve(equation.subs(v, sy.Matrix(vseq)), vseq, [0,0])
But there ought to be a cleaner way to convert a MatrixSymbol to a sequence of Symbols, or a way to avoid needing to do so in the first place.
A cleaner way is to create a Matrix from symarray:
v = sy.Matrix(sy.symarray("v", (2,)))
equation = v - sy.Matrix([17, 23])
sy.nsolve(equation, v, [0, 0])
Here, symarray creates a (NumPy) array of symbols [v_0, v_1] which is then turned into a Matrix. One can also use sy.symarray("v", (2, 1)) so it's a double array, but since SymPy's Matrix constructor is cool with 1D inputs, this is not necessary.
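Putting it together as a runnable script (the printed result should match the workaround's output):

import sympy as sy

# A Matrix of plain Symbols [v_0, v_1] instead of a MatrixSymbol.
v = sy.Matrix(sy.symarray("v", (2,)))
equation = v - sy.Matrix([17, 23])

# nsolve accepts the Matrix of symbols directly.
print(sy.nsolve(equation, v, [0, 0]))  # Matrix([[17.0], [23.0]])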

c++ eigen A.inverse()*B not equal to A.ldlt().solve(B)

I would like to compute the trace of the product of two given matrices, say A and B: Trace(AInv * B), where * is the regular matrix product, AInv is the inverse of A (A being symmetric and positive definite), and B is symmetric.
Solution 1: computing the inverse explicitly
Noting that Trace(AInv * B) is equivalent to taking the sum of the componentwise product of AInv and B:
double sol1 = (A.inverse().cwiseProduct(B)).sum();
Solution 2: using ldlt decomposition from the Eigen library
double sol2 = (A.selfadjointView<Lower>().ldlt().solve(B)).trace();
Theoretically, these solutions should be the same, but in my test they are not. It seems like I am missing something. As .ldlt().solve() is not made to compute a matrix inverse but rather to solve a linear system, my question is: does .ldlt() perform any sort of normalization? If not, what am I doing wrong?
Many thanks!
The statement computing sol1 is wrong: you need to either transpose one of the operands or use an actual matrix-matrix product. Correct versions:
double sol1 = (A.inverse().cwiseProduct(B.transpose())).sum();
double sol1 = (A.inverse().lazyProduct(B)).diagonal().sum();
double sol1 = (A.inverse().lazyProduct(B)).trace();
double sol1 = (A.inverse() * B).diagonal().sum();
double sol1 = (A.inverse() * B).trace();
Note that, in Eigen, when you write (A*B).diagonal(), only the diagonal elements of A*B are computed, not the off-diagonal ones.
In general, it is not recommended to explicitly compute the inverse of a matrix; using either A.lu().solve(B) or A.ldlt().solve(B) will give you more accurate results and will be faster too, because, unless A is very small (2x2, 3x3, or 4x4), A.inverse() is equivalent to A.lu().solve(I). In the future, Eigen will very likely rewrite expressions like:
A.inverse() * B
as:
A.lu().solve(B)
for you anyway.
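Here is a minimal sketch (the sizes and the SPD construction are my own) comparing the corrected computations; all three values should agree up to floating-point error:

#include <Eigen/Dense>
#include <iostream>

int main() {
    using Eigen::MatrixXd;
    const int n = 50;

    // Build a symmetric positive definite A and a symmetric B.
    MatrixXd M = MatrixXd::Random(n, n);
    MatrixXd A = M * M.transpose() + double(n) * MatrixXd::Identity(n, n);
    MatrixXd C = MatrixXd::Random(n, n);
    MatrixXd B = 0.5 * (C + C.transpose());

    // Trace(AInv * B) via the corrected coefficient-wise product.
    double sol1 = A.inverse().cwiseProduct(B.transpose()).sum();

    // The same trace via an LDLT solve; no explicit inverse is formed.
    double sol2 = A.ldlt().solve(B).trace();

    // The same trace via the explicit product.
    double sol3 = (A.inverse() * B).trace();

    std::cout << sol1 << " " << sol2 << " " << sol3 << std::endl;
}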

Create NxN matrix mathematica

I'm having a bit of trouble generating an NxN matrix in Mathematica. Given the value of N, I need to construct the NxN matrix whose entries run from 1 to N^2, laid out row by row. Here is my attempt:
N = Input["Enter value for N:"];
matrix = ConsantArray[0,{N,N}];
Do[matrix[[i,j]] = **"???"** ,{i,N}, {j,N}]
matrix // Matrix Form
I'm not sure what should go as the statement in my Do loop. Any help would be appreciated.
You could create a 1D array [1 ... n^2] and then reshape or partition it into a matrix.
matrix = ArrayReshape[Range[n^2], {n, n}]
(* also works: *)
matrix = Partition[Range[n^2], n]
A couple more ways:
matrix=Table[j+(i-1) n,{i,n},{j,n}]
matrix=Array[#2+(#1-1) n &,{n,n}]
The Table form should give you a clue how to fix your Do loop as well, but that's usually a poor approach performance-wise.
By the way, do not use capital N: it is a reserved symbol in Mathematica.
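For completeness, a sketch of the corrected Do-loop version (using lowercase n, as advised above):

n = 5;  (* example size *)
matrix = ConstantArray[0, {n, n}];
Do[matrix[[i, j]] = j + (i - 1) n, {i, n}, {j, n}];
matrix // MatrixForm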

scala breeze multiply matrix by transpose

I want to multiply two matrices. A * B works just fine, but what I really want is A.t * B. After transposing A, however, the result becomes Transpose[Matrix[Double]] instead of Matrix[Double], so the operation is rejected by the compiler. Mathematically, the transpose of a matrix is another matrix, and it should be perfectly OK to multiply it by another matrix. How is this properly done in Breeze?
A.t.asInstanceOf[DenseMatrix[Double]] did the trick.
I had a similar problem when I was using a plain Matrix type in Breeze, for example something like this:
def buildMatrix(): Matrix[Double] = {
  DenseMatrix((1.0, 2.0, 3.0), (4.0, 5.0, 6.0))
}
val m = buildMatrix()
val t = m.t
m * t
gives me the compiler error Error:(13, 69) could not find implicit value for parameter op: breeze.linalg.operators.OpMulMatrix.Impl2[breeze.linalg.Matrix[Double],breeze.linalg.Transpose[breeze.linalg.Matrix[Double]],That]
But if I make sure that the matrix I'm transposing is a DenseMatrix, like this:
val m = buildMatrix().toDenseMatrix
Then the * operator works fine.
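A minimal sketch of the working variant (names follow the snippet above):

import breeze.linalg.{DenseMatrix, Matrix}

object TransposeExample {
  def buildMatrix(): Matrix[Double] =
    DenseMatrix((1.0, 2.0, 3.0), (4.0, 5.0, 6.0))

  def main(args: Array[String]): Unit = {
    // toDenseMatrix recovers the concrete type, so .t yields a
    // DenseMatrix and the * operator finds its implicits.
    val m = buildMatrix().toDenseMatrix
    val t = m.t             // DenseMatrix[Double], not Transpose[...]
    val product = m * t     // 2x3 times 3x2 gives a 2x2 matrix
    println(product)
  }
}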
