I am looking for the Eigen (C++) equivalent of Matlab's expm() function.
Would anyone know where to find this?
Please see Matlab's documentation for the expm() function.
https://www.mathworks.com/help/matlab/ref/expm.html
See the unsupported MatrixFunctions module:
#include <unsupported/Eigen/MatrixFunctions>
MatrixXd A = ..., expA;
expA = A.exp();
In the literature, the conjugate gradient method is typically presented for real symmetric positive-definite matrices. However, in the description of the CG method in the Eigen library:
https://eigen.tuxfamily.org/dox/group__IterativeLinearSolvers__Module.html
one can find the statement:
"ConjugateGradient for selfadjoint (hermitian) matrices"
This implies that it should also work for Hermitian (complex, not purely real) matrices. Is that the case?
A minimal example shows that it actually doesn't work naively with Hermitian matrices. Is there a trick that one needs to know or is this an error in the description?
My minimal example uses the spin-3/2 matrices Sx (real symmetric) and Sy (complex Hermitian), whose eigenvalues are known to be -1.5, -0.5, 0.5, 1.5.
The results for the real symmetric case are fine, but the complex case results in NaN.
#include <iostream>
#include <complex>
#include <Eigen/Core>
#include <Eigen/IterativeLinearSolvers>

int main(int argc, char **argv) {
    Eigen::VectorXcd b = Eigen::VectorXcd::Ones(4);
    Eigen::VectorXcd x;
    std::complex<double> i_unit(0, 1);

    // Hermitian matrix:
    Eigen::MatrixXcd A(4, 4);
    A << 0, -i_unit*sqrt(3.)/2., 0, 0,
         i_unit*sqrt(3.)/2., 0, -i_unit, 0,
         0, i_unit, 0, -i_unit*sqrt(3.)/2.,
         0, 0, i_unit*sqrt(3.)/2., 0;

    // Real symmetric matrix:
    Eigen::MatrixXcd B(4, 4);
    B << 0, sqrt(3.)/2., 0, 0,
         sqrt(3.)/2., 0, 1, 0,
         0, 1, 0, sqrt(3.)/2.,
         0, 0, sqrt(3.)/2., 0;

    Eigen::ConjugateGradient<Eigen::MatrixXcd, Eigen::Lower|Eigen::Upper> cg;

    cg.compute(A);
    x = cg.solve(b);
    std::cout << "Hermitian matrix:" << std::endl;
    std::cout << "A: " << std::endl << A << std::endl;
    std::cout << "b: " << std::endl << b << std::endl;
    std::cout << "x: " << std::endl << x << std::endl;
    std::cout << "(b-A*x).norm(): " << (b - A*x).norm() << std::endl;
    std::cout << "cg.error(): " << cg.error() << std::endl;
    std::cout << std::endl;

    cg.compute(B);
    x = cg.solve(b);
    std::cout << "Real symmetric matrix:" << std::endl;
    std::cout << "B: " << std::endl << B << std::endl;
    std::cout << "b: " << std::endl << b << std::endl;
    std::cout << "x: " << std::endl << x << std::endl;
    std::cout << "(b-B*x).norm(): " << (b - B*x).norm() << std::endl;
    std::cout << "cg.error(): " << cg.error() << std::endl;
    std::cout << std::endl;
    return 0;
}
Hermitian is not enough: the matrix also needs to be positive definite, which is not the case here, since your matrix has both positive and negative eigenvalues. Anyway, CG is designed for handling very large sparse matrices; for a 4x4 matrix, better to use a dense decomposition. In your case, LDLT will do well.
I have some positive constant value that comes from a different library than mine, call it the_val. Now, I want log_of_the_val to be floor(log_2(the_val)) - not speaking in C++ code - and I want that to happen at compile time, of course.
Now, with gcc, I could do something like
decltype(the_val) log_of_the_val = sizeof(the_val) * CHAR_BIT - __builtin_clz(the_val) - 1;
and that should work, I think (width in bits minus the number of leading zeros). Otherwise, I could implement a constexpr function myself for it, but I'm betting that there's something else, simpler and portable, that I could use at compile time. ... question is, what would that be?
The most straightforward solution is to use std::log2 from <cmath>, but that isn't specified to be constexpr - it is under gcc, but not under clang. (Actually, libstdc++ std::log2 calls __builtin_log2, which is constexpr under gcc.)
__builtin_clz is constexpr under both gcc and clang, so you may want to use that.
The fully portable solution is to write a recursive constexpr integral log2:
constexpr unsigned cilog2(unsigned val) { return val ? 1 + cilog2(val >> 1) : -1; }
I have two matrices and would like to treat them as a 1-D list and do a dot product. I tried the following, but it is not working:
Eigen::MatrixXf a(9,9), b(9,9);
float r = a.array().dot(b.array());
What would be the best way to do it?
Computing the coefficient-wise product of 2 matrices is a common pattern, so Eigen provides the cwiseProduct() method to write it elegantly. This would lead to the following expression:
float r = a.cwiseProduct(b).sum();
Try this. :)
Eigen::MatrixXf a(9, 9), b(9, 9);
Eigen::Map<Eigen::VectorXf> aVector(a.data(), 81);
Eigen::Map<Eigen::VectorXf> bVector(b.data(), 81);
float squareError = aVector.dot(bVector);
Here is documentation about Map.
Actually, I figured it out:
float r = (a.array()*b.array()).sum();
Is there any possible way to make pseudorandom numbers without any binary operators? Since this is for a 3D map, I'm trying to make it a function of X and Y, but hopefully include a random seed somewhere in there so it won't be the same every time. I know you can make a noise function like this with binary operators:
double PerlinNoise::Noise(int x, int y) const
{
    int n = x + y * 57;
    n = (n << 13) ^ n;
    int t = (n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff;
    return 1.0 - double(t) * 0.931322574615478515625e-9;  // i.e. t / 1073741824.0
}
But since I'm using Lua instead of C++, I can't use any binary operators. I've tried many different things, yet none of them work. Help?
For bitwise operators (I guess that is what you mean by "binary"), have a look at the Bitwise Operators wiki page, which contains a list of modules you can use, like Lua BitOp and bitlib.
If you do not want to implement it by yourself, have a look at the module lua-noise, which contains an implementation of Perlin noise. Note that it is a work-in-progress C module.
If I'm not mistaken, Matt Zucker's FAQ on Perlin noise uses only arithmetic operators to describe/implement it. It mentions bitwise operators only as an optimization trick.
You should implement both ways and test them with the same language/runtime, to get an idea of the speed difference.
In the above routine, the bitwise operators can be converted to arithmetic operations:
The << 13 becomes * 8192 (that is, * 2^13).
The & 0x7FFFFFFF becomes a mod by 2^31 (for non-negative values).
The ^ is the trickier one: it has no one-step arithmetic equivalent, but it can be simulated bit by bit with division and remainder.
As long as overflow isn't an issue, this should be all you need.
It'd be pretty slow, but you could simulate these with division and multiplication, I believe.
What is wrong with the code snippet below that VS2010 wouldn't compile it?
int m = sqrt( n );
( I am trying to ascertain whether an integer is prime... )
You need to pass a specific floating-point type to sqrt - there's no integer overload. Use e.g.:
long double m = sqrt(static_cast<long double>(n));
As you include <cmath> and not <math.h>, I'm assuming you want C++. For C, you'll need to use e.g.:
double m = sqrt((double) n);
The error you get simply means that the compiler cannot automatically select a sqrt overload for you: the integer you pass needs to be converted to a floating-point type, and the compiler doesn't know which floating-point type, and hence which sqrt overload, it should select.