Eigen dynamic matrix initialization

I am having some trouble figuring out how to set the rows and columns of a MatrixXd at runtime in Eigen. Can anyone point me to some documentation or give some pointers on how to do this?
Thanks.

You can set the size of a MatrixXd at runtime with the method resize(nrow, ncol). You can read more in the Eigen documentation on resizing dynamic matrices and in the API reference for resize().

Related

Mapping complex sparse matrix in Eigen from MATLAB workspace

I am working on solving the linear algebraic equation Ax = b using Eigen solvers through a MATLAB mex function. Given a complex sparse matrix A and a sparse vector b from the MATLAB workspace, I want to map A and b into Eigen's sparse matrix format, then use Eigen's linear equation solvers to solve the system, and finally transfer the result x back to the MATLAB workspace.
However, I am not good at C++ and not familiar with Eigen either, so I am stuck at the first step: constructing the complex sparse matrix in a format Eigen accepts.
I have found there is the following function in Eigen,
Eigen::MappedSparseMatrix<double,RowMajor> mat(rows, cols, nnz, row_ptr, col_index, values);
I can use mex functions such as mxGetPr, mxGetPi, mxGetIr, and mxGetJc to get the information for the above "rows, cols, nnz, row_ptr, col_index, values". However, since in my case matrix A is a complex sparse matrix, I am not sure whether MappedSparseMatrix can handle that.
If it can, what should the format of the MappedSparseMatrix be? Is the following correct?
Eigen::MappedSparseMatrix<std::complex<double>> mat(rows, cols, nnz, row_ptr, col_index, values_complex);
If so, how should I construct that values_complex ?
I found a relevant topic earlier: I can use the following code to get a complex dense matrix.
MatrixXcd mat(m,n);
mat.real() = Map<MatrixXd>(realData,m,n);
mat.imag() = Map<MatrixXd>(imagData,m,n);
However, since my matrix A is sparse, it seems to produce errors if I define mat as a complex sparse matrix like the following:
SparseMatrix<std::complex<double> > mat;
mat.real() = Map<SparseMatrix>(rows, cols, nnz, row_ptr, col_index, realData);
mat.imag() = Map<SparseMatrix>(rows, cols, nnz, row_ptr, col_index, imagData);
So can anyone provide some advice for that?
MATLAB stores complex entries in two separate buffers, one for the real components and one for the imaginary components, whereas Eigen needs them interleaved:
value_ptr = [r0,i0,r1,i1,r2,i2,...]
so that the buffer is compatible with std::complex<>. So in your case you will have to create a temporary buffer yourself, holding the values in that interleaved format, to be passed to MappedSparseMatrix or, if you are using Eigen 3.3, to Map<SparseMatrix<std::complex<double>,RowMajor> >.
Moreover, you will have to adjust the index buffers so that they are zero-based. To this end, decrement by one all entries of row_ptr and col_index before passing them to Eigen, and increment them by one afterward.

How to work with multidimensional data in Halide

I have started working with Halide. I know it is explicitly an image-processing framework, but is there a way to handle multidimensional arrays (> 3D) in it without complex steps like dimensionality reduction or separating the mathematical equations into lower-dimensional spaces?
Thanks,
Karnajit
Currently, I believe there is a hard limit of 4 dimensions for Buffer: https://github.com/halide/Halide/blob/master/src/Buffer.cpp#L149
Then there seem to be further limitations depending on the target backend:
https://github.com/halide/Halide#limitations

CGAL ConvexHull and Eigen

How can I use my own data with CGAL for constructing the convex hull? In particular, I would like to use an Eigen3 type and somehow wrap it so that CGAL can use it directly, without copying every Eigen3 Vector2d into the CGAL Point_2 class.
The Eigen types all have the member functions .x(), .y(), and .z().
Can somebody give an introduction on how to achieve this? The kernel extension tutorial is very hard to understand.
Update
So far I have come up with a custom iterator which stores a reference to an Eigen::Matrix (a pointer or an Eigen::Ref) and iterates over its columns, which are 2x1 vectors. That is only one part of the puzzle. Secondly, I managed to simply typedef Point_2 as Eigen::Vector2d and follow the kernel extension tutorial (see above), but I still have not figured out how to put the whole puzzle together. (I will post the code tomorrow.)

Algorithm and code in SCILAB for row reduced echelon form

I am a novice learner of Scilab, and I know there is a pre-defined function rref that produces the row reduced echelon form. I am looking for an algorithm that transforms an m x n matrix into row reduced echelon form and normal form, and hence finds the rank of the matrix.
Can you please help? Also, since rref is a pre-defined function in Scilab, how can we see its Scilab code? How can one find the code/algorithm behind any function in Scilab?
Thanks for your help.
Help about functions
The help pages of Scilab always provide some information and short examples. You can also look at the help online (the rref help page).
The examples are shown without output, but demonstrate the various uses. A good first approach is to copy-paste the complete example code into a new SciNotes window, save it, and press F5 to see what it does. Then modify or extend the code to suit the behavior you want.
rref & rank
Aren't you looking for the rank function instead? Here is an example using both (without trailing semicolons, so Scilab displays each result):
A = [1,2,3;4,5,6;1,2,3]
rref(A)
rank(A)
B = [1,2,3;7,5,6;0,8,7]
rref(B)
rank(B)
Source code
Since Scilab is open source, you can find the source code in its Git repository; the rref implementation, for instance, is there.

Library function capabilities of Mathematica

I am trying to use CUSP as an external linear solver for Mathematica to use the power of the GPU.
Here is the CUSP project webpage. I am asking for suggestions on how to integrate CUSP with Mathematica; I am sure many of you here will be interested in discussing this. I think writing the input matrix to a file and then feeding it to a CUSP program is not the way to go. Using Mathematica's LibraryFunctionLoad would be a better way to pipeline the input matrix to the GPU-based solver on the fly. What would be the way to supply the matrix and the right-hand side directly from Mathematica?
Here is some CUSP code snippet.
#include <cusp/hyb_matrix.h>
#include <cusp/io/matrix_market.h>
#include <cusp/krylov/cg.h>

int main(void)
{
    // create an empty sparse matrix structure (HYB format)
    cusp::hyb_matrix<int, float, cusp::device_memory> A;

    // load a matrix stored in MatrixMarket format
    cusp::io::read_matrix_market_file(A, "5pt_10x10.mtx");

    // allocate storage for solution (x) and right hand side (b)
    cusp::array1d<float, cusp::device_memory> x(A.num_rows, 0);
    cusp::array1d<float, cusp::device_memory> b(A.num_rows, 1);

    // solve the linear system A * x = b with the Conjugate Gradient method
    cusp::krylov::cg(A, x, b);

    return 0;
}
This question also opens the door to discussing the compilation capabilities of Mathematica 8, as well as Mathematica's MathLink interface. I hope people here find this problem worthy and interesting enough to ponder.
BR
If you want to use LibraryLink (for which LibraryFunctionLoad is used to access a dynamic-library function as a Mathematica downvalue), there is actually not much room for discussion: LibraryFunctions can receive Mathematica tensors of machine doubles or machine integers, and you're done.
The Mathematica MTensor format is a dense array, just as you'd naturally use in C, so if CUSP uses some other format you will need to write some glue code to translate between representations.
Refer to the LibraryLink tutorial for full details.
You will want to especially note the section "Memory Management of MTensors" in the Interaction with Mathematica page, and choose the "Shared" mode to just pass a Mathematica tensor by reference.
