I have a data array (double *) in memory which looks like:
[x0,y0,z0,junk,x1,y1,z1,junk,...]
I would like to map it to an Eigen vector and virtually remove the junk values by doing something like:
Eigen::Map<
Eigen::Matrix<double, Eigen::Dynamic, 1, Eigen::ColMajor>,
Eigen::Unaligned,
Eigen::OuterStride<4>
>
But it does not work, because OuterStride seems to be restricted to 2D matrices.
Is there a trick to do what I want?
Many thanks!
With the head of Eigen, you can map it as a 2D matrix and then view it as a 1D vector:
auto m1 = Matrix<double,3,Dynamic>::Map(ptr, 3, n, OuterStride<4>());
auto v = m1.reshaped(); // new in future Eigen 3.4
But be aware that accesses to such a v involve costly integer division/modulo operations.
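If that cost matters in a hot loop, you can also evaluate the view once into a plain vector (one copy up front, contiguous access afterwards):

VectorXd vc = v; // materializes the reshaped view into its own dense storage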
If you want a solution compatible with Eigen 3.3, you can do something like this:
VectorXd convert(double const* ptr, Index n)
{
VectorXd res(n*3);
Matrix3Xd::Map(res.data(), 3, n) = Matrix4Xd::Map(ptr, 4, n).topRows<3>();
return res;
}
But this of course would copy the data, which you probably intended to avoid.
Alternatively, you should think about whether it is possible to access your data as a 3xN array/matrix instead of a flat vector (really depends on what you are actually doing).
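For illustration, a minimal sketch of that in-place view (assuming ptr points at the padded [x,y,z,junk,...] buffer and n is the number of points): the padded buffer can be mapped directly as a 3xN matrix with an outer stride of 4, with no copy involved:

// View the padded buffer as a 3xN matrix; each column is one point and
// the junk entries simply fall into the stride gap.
Eigen::Map<const Eigen::Matrix3Xd, Eigen::Unaligned, Eigen::OuterStride<4>> pts(ptr, 3, n);
double y1 = pts(1, 1); // y-coordinate of point 1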
There is a static assert in Eigen:
EIGEN_STATIC_ASSERT((EIGEN_IMPLIES(MaxRowsAtCompileTime==1 && MaxColsAtCompileTime!=1, (Options&RowMajor)==RowMajor)
which prevents one from having column-major fixed-size matrices with 1 row:
Eigen::Matrix<double, 1, 3, Eigen::ColMajor> m;
I don't really understand why Eigen does not let the user do that (which, by the way, is quite annoying in my code design)... What is the point of the storage order for a 1D array?
Thanks!!
I am working on some matrix-related problems in C++. I want to solve the problem Y = aX + Y, where X and Y are matrices and a is a constant. I thought about using the BLAS routine daxpy; however, according to the documentation, daxpy is a vector routine, and I am not getting the same results as when I solve the same problem in MATLAB.
I am currently running this:
F77NAME(daxpy)(N, a, X, 1, Y, 1);
When you need to perform the operation Y = a*X + Y, it does not matter whether X and Y are 1D or 2D matrices, since the operation is done element-wise.
So, if you allocated each matrix as one contiguous block behind a single pointer, e.g. double* A = new double[M*N];, then you can use daxpy by defining the dimension of the vector as M*N:
int MN = M*N;
int one = 1;
F77NAME(daxpy)(&MN, &a, X, &one, Y, &one); // scalars by address, the arrays X and Y passed directly
The same goes for a stack-allocated two-dimensional matrix such as double A[3][2];, since that memory is also laid out contiguously.
Otherwise, you need to use a for loop and add each row separately.
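Putting it together, a minimal sketch (the extern declaration is an assumption about how F77NAME resolves on your system; the usual Fortran symbol is daxpy_ with every argument passed by pointer):

extern "C" void daxpy_(const int* n, const double* alpha,
                       const double* x, const int* incx,
                       double* y, const int* incy);

// Y = a*X + Y for two M x N matrices stored as contiguous blocks
void matrix_axpy(int M, int N, double a, const double* X, double* Y)
{
    int MN = M * N; // treat both matrices as flat vectors of length M*N
    int one = 1;
    daxpy_(&MN, &a, X, &one, Y, &one);
}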
Is there a more elegant solution than copying the values point by point?!
Something like this works for a 1D vector...
vector<float> vec(mat.data(), mat.data() + mat.rows() * mat.cols());
I tried various other alternatives suggested by the GCC compiler for vector<vector>, but nothing worked out...
Eigen::MatrixXf stores its data in one efficient linear block of memory, while a vector of vectors represents a very different data type.
For a multidimensional vector, you would therefore have to read the matrix block by block and copy those values into the outer vectors.
Another way would be to copy the values into a vector-based class with specific accessors... but that would end up reconstructing a Matrix-like class.
Why do you want to do that? What kind of access are you trying to provide? Maybe you should instead try to implement similar access using the Eigen::Matrix interface.
Conversion
Eigen::MatrixXf m(2,3);
std::vector<std::vector<float>> v;
for (int i = 0; i < m.rows(); ++i)
{
    // MatrixXf is column-major by default, so the elements of a row are
    // not contiguous in memory; copy the row into a contiguous
    // RowVectorXf before handing its data to the std::vector constructor.
    Eigen::RowVectorXf row = m.row(i);
    v.push_back(std::vector<float>(row.data(), row.data() + row.size()));
}
I have a large genetic dataset (X, Y coordinates), of which I can easily know one dimension (X) at runtime.
I drafted the following code for a matrix class that lets me fix the size of one dimension while leaving the other dynamic via std::vector. Each vector is allocated with new and owned by a unique_ptr; those are stored in a C-style array, which is likewise allocated with new and owned by a unique_ptr.
class Matrix
{
private:
typedef std::vector<Genotype> GenVec;
typedef std::unique_ptr<GenVec> upGenVec;
std::unique_ptr<upGenVec[]> m;
unsigned long size_;
public:
// ...
// construct
Matrix(unsigned long _size): m(new upGenVec[_size]), size_(_size)
{
for (unsigned long i = 0; i < this->size_; ++i)
this->m[i] = upGenVec(new GenVec);
}
};
My question:
Does it make sense to use this instead of std::vector< std::vector<Genotype> >?
My reasoning behind this implementation is that I only require one dimension to be dynamic, while the other should be fixed. Using std::vectors could imply more memory allocation than needed. As I am working with data that would fill up estimated ~50GB of RAM, I would like to control memory allocation as much as I can.
Or, are there better solutions?
I won't cite any paragraphs from the specification, but I'm pretty sure that std::vector's memory overhead is fixed, i.e. it doesn't depend on the number of elements it contains. So I'd say your solution with the C-style array is actually worse memory-wise, because what you allocate, excluding the actual data, is:
N * pointer_size (first dimension array)
N * vector_fixed_size (second dimension vectors)
In vector<vector<...>> solution what you allocate is:
1 * vector_fixed_size (first dimension vector)
N * vector_fixed_size (second dimension vectors)
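In other words, the nested-vector solution saves the N raw pointers of the first dimension. You can also convince yourself that the per-vector overhead is fixed with a quick check (the exact number is implementation-specific; typical 64-bit standard libraries report 24 bytes, i.e. three pointers, independent of element type and element count):

#include <iostream>
#include <vector>

int main()
{
    std::vector<double> empty;
    std::vector<double> big(1000000);
    // Both print the same size: the elements live in a separate heap block.
    std::cout << sizeof(empty) << ' ' << sizeof(big) << '\n';
}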
I'm playing around with the Accelerate framework for the first time, with the goal of implementing some vectorized code in an iOS application. I've never worked with vectors in Objective-C or C before. Having some experience with MATLAB, I wonder whether using Accelerate is indeed that much more of a pain. Suppose I want to calculate the following:
b = 4*(sin(a/2))^2 where a and b are vectors.
MATLAB code:
a = 1:4;
b = 4*(sin(a/2)).^2;
However, as I see it after some digging through the documentation, things are quite different with Accelerate.
My C implementation:
float a[4] = {1,2,3,4}; //define a
int len = 4;
float div = 2; //define 2
float a2[len]; //define intermediate result 1
vDSP_vsdiv(a, 1, &div, a2, 1, len); //divide
float sinResult[len]; //define intermediate result 2
vvsinf(sinResult, a2, &len); //take sine
float sqResult[len]; //square the result
vDSP_vsq(sinResult, 1, sqResult, 1, len); //take square
float factor = 4; //multiply all this by four
float b[len]; //define answer vector
vDSP_vsmul(sqResult, 1, &factor, b, 1, len); //multiply
//unset all variables I didn't actually need
Honestly, I don't know what's worst here: keeping track of all intermediate steps, trying to memorize how the arguments are passed in vDSP with respect to VecLib (quite different), or that it takes so much time doing something quite trivial.
I really hope I am missing something here and that most steps can be merged or shortened. Any recommendations on coding resources, good coding habits (learned the hard way or from a book), etc. would be very welcome! How do you all deal with multiple lines of vector calculations?
I guess you could write it that way, but it seems awfully complicated to me. I like this better (intel-specific, but can easily be abstracted for other architectures):
#include <Accelerate/Accelerate.h>
#include <immintrin.h>
const __m128 a = {1,2,3,4};
const __m128 sina2 = vsinf(a*_mm_set1_ps(0.5));
const __m128 b = _mm_set1_ps(4)*sina2*sina2;
Also, just to be pedantic, what you're doing here is not linear algebra. Linear algebra involves only linear operations (no squaring, no transcendental operations like sin).
Edit: as you noted, the above won't quite work out of the box on iOS; the biggest issue is that there is no vsinf (vMathLib is not available in Accelerate on iOS). I don't have the SDK installed on my machine to test, but I believe that something like the following should work:
#include <Accelerate/Accelerate.h>
const vFloat a = {1, 2, 3, 4};
const vFloat a2 = a*(vFloat){0.5,0.5,0.5,0.5};
const int n = 4;
vFloat sina2;
vvsinf((float *)&sina2, (const float *)&a2, &n); // input is a2 (= a/2), not a
const vFloat b = sina2*sina2*(vFloat){4,4,4,4};
Not quite as pretty as what is possible with vMathLib, but still fairly compact.
In general, a lot of basic arithmetic operations on vectors just work; there's no need to use calls to any library, which is why Accelerate doesn't go out of its way to supply those operations cleanly. Instead, Accelerate usually tries to provide operations that aren't immediately available by other means.
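For example (assuming you compile with clang or gcc, whose vector extensions are what vFloat is built on), elementwise arithmetic needs no library call at all:

vFloat x = {1, 2, 3, 4};
vFloat y = x*x + (vFloat){1, 1, 1, 1}; // per-lane multiply and add, plain operators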
To answer my own question:
In iOS 6, vMathLib will be introduced. As Stephen clarified, vMathLib could already be used on OS X, but it was not available on iOS. Until now.
The functions that vMathLib provides will allow for easier vector calculations.
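With vMathLib available, the iOS version should collapse to something like Stephen's OS X snippet (an untested sketch, assuming vsinf is the vMathLib entry point):

#include <Accelerate/Accelerate.h>

const vFloat a = {1, 2, 3, 4};
const vFloat sina2 = vsinf(a*(vFloat){0.5f, 0.5f, 0.5f, 0.5f});
const vFloat b = (vFloat){4, 4, 4, 4}*sina2*sina2;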