how many parameters can a vector have? - c++11

Until now I had only one form in mind:
vector<int> vec(100), where 100 is the size of the vector.
But recently I saw something like this:
vector<float> angle (3600, 0.), and I don't understand what the second parameter is for.
Is it possible to pass even more parameters to a vector in C++?

In your case the two parameters are:
1. the size of the vector
2. the value used to fill it
So your vector angle contains the value 0 repeated 3600 times.

The constructor is heavily overloaded to allow for initialisation to a set value and for specialised memory management. Just navigate to the documentation and read the options.
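
For illustration, here are the most commonly used std::vector constructor overloads in C++11 (each also accepts an optional allocator argument, omitted here):

#include <vector>

std::vector<int> a;                      // empty vector
std::vector<int> b(100);                 // 100 value-initialized elements (all 0)
std::vector<int> c(3600, 42);            // 3600 elements, each equal to 42
std::vector<int> d{1, 2, 3};             // initializer list: elements 1, 2, 3
std::vector<int> e(d.begin(), d.end());  // copy of an iterator range
std::vector<int> f(d);                   // copy of another vector

So vector<float> angle(3600, 0.) is the (count, value) overload: 3600 elements, all initialized to 0.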

Related

MSE giving negative results in High-Level Synthesis

I am trying to calculate the mean squared error in Vitis HLS. I am using hls::pow(..., 2) and dividing by n, but all I get is a negative value, for example -0.004. This does not make sense to me. Could anyone point out the problem or explain what is happening?
Also, calculating the squared error with hls::pow does not give the same results as (a - b) * (a - b). For information, I am using ap_fixed<> types, not regular float or double precision.
Thanks in advance!
It sounds like an overflow and/or underflow issue: the values reach the sign bit and are interpreted as negative while actually just being very large.
Have you tried tuning the representation precision or the different saturation/rounding options of the fixed-point class? This tuning will depend on the data you're processing.
For example, if you handle data that you know will range between -128.5 and 1023.4, you might need very few fractional bits, say 3 or 4, leaving the rest for the integer part (which might roughly be log2((1023+128)^2) bits).
Alternatively, if n is very large, you can try a moving average and calculate the mean in small "chunks" of length m < n.
P.S. Taking the absolute value of a - b and storing it in an ap_ufixed before the multiplication can already give you one extra bit, but it adds an instruction/operation/logic to the algorithm (which might not be a problem if the design is pipelined, but requires space if the size of ap_ufixed is very large).
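
As a minimal sketch of the points above (the bit widths here are assumptions; derive them from your actual data range), the squared differences can be accumulated in a widened, saturating type so the sum cannot wrap into the sign bit:

#include <ap_fixed.h>

typedef ap_fixed<16, 8> data_t;                  // assumed input format
typedef ap_fixed<40, 32, AP_RND, AP_SAT> acc_t;  // wide, saturating accumulator

acc_t mse(const data_t a[], const data_t b[], int n) {
    acc_t sum = 0;
    for (int i = 0; i < n; ++i) {
        acc_t d = a[i] - b[i];  // difference, computed in the wider type
        sum += d * d;           // square via multiplication, not hls::pow
    }
    return sum / n;
}

With AP_SAT, an overflowing sum clamps at the largest representable value instead of flipping negative, which makes an undersized accumulator visible rather than misleading.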

Make a previously unknown number of parallel operations. In VHDL

I'm working on a project for which I need to do calculations with vectors (orthogonalizing a matrix using the Gram-Schmidt method). The length of these vectors is not known in advance; the program must be able to adapt to different lengths. One such calculation is computing a new vector C as the sum of A and B. Each element of the vectors is a number in fixed-point.
I want C(i) = A(i) + B(i) for all elements of the vector (for i = 0 to N, where N is the vector length).
I can find two solutions for this, but both present some problems:
1- I can declare, in the entity, vectors whose length changes according to a generic and then just create a for loop which goes through the whole vector:
for I in 0 to N loop
C(I)<=A(I)+B(I);
end loop;
The problem with this solution is that the execution would be sequential, and therefore slow. I'm not completely sure about this and I don't know how to check it, but I guess the compiler is not smart enough to notice that it can be processed in parallel. In this application, speed is a key factor.
2- I can declare vectors which are as long as the maximum possible length of the actual data and fill them with zeroes. Then I could just assign:
C(0)<=A(0)+B(0);
C(1)<=A(1)+B(1);
C(2)<=A(2)+B(2);
...
C(Nmax)<=A(Nmax)+B(Nmax);
This is not an elegant solution, and in this application N can be between 3 and 300, so it could be a complete waste and tedious to program.
3- I want to find a third solution which is able to create a number (set by the generic) of combinational calculations following a template such as C(i) = A(i) + B(i). Is there any solution like this? It would effectively be a loop whose iterations are not executed sequentially but all at the same time.
I know similar things can be done using CUDA, but this project is actually a comparison between GPUs and FPGAs, so changing the platform is not a suitable solution either.
Thank you in advance
Edit: I have thought of another unsatisfactory solution, but I want to share it in case it is helpful for somebody else reading this in the future. Given that A and B have the same length, you can write them in a 1-D format, that is: A(normal) = [1001,1100,0011], A(1-D) = 100111000011. The same would be done with B.
If you know beforehand that the sum of any two possible numbers can be expressed in the same number of bits, there will be no problem. So with 4 unsigned bits you must make sure that in every possible case the numbers in A or B are not higher than 0111. You could then just write C(1-D) = A(1-D) + B(1-D) and assign C(0) = C(1-D)(3 downto 0), C(1) = C(1-D)(7 downto 4), etc.
If you cannot guarantee that the numbers are not higher than 0111 (in the 4-bit case), it won't work.
You might be able to use the 'length attribute to create a loop that depends on the size of your vector:
https://www.csee.umbc.edu/portal/help/VHDL/attribute.html
As mentioned in the comments on the question, the loop will be unrolled by synthesis as long as it is not synchronized to the clock, so it does not execute sequentially in hardware; a for-generate makes this parallelism explicit, as sketched below.
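
A minimal sketch of that idea (entity and type declarations omitted; names are illustrative): a for-generate elaborates one adder per element, and all of them operate concurrently:

gen_add : for I in C'range generate
    C(I) <= A(I) + B(I);
end generate gen_add;

Each generated assignment is its own concurrent statement, so all the additions are performed by parallel hardware; since I ranges over C'range, the number of adders follows the generic that sizes the arrays, fixed at elaboration time.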

How to check if an Eigen::Matrix4f is close to identity matrix?

Is there a good practice to check whether my result Eigen::Matrix4f is almost the identity? Due to floating-point errors I sometimes don't get exactly zeros and ones.
One brute-force method would be to check whether each value in the matrix is within a certain EPSILON of the expected one, and if just one of them fails, then it is not an identity matrix. Is there a better solution?
First, you have to define in what sense they shall be "close". There can be many different definitions of closeness, depending on your specific task. One of the most used is:
norm( A - I ) < eps
where norm is some matrix norm. Most common are 2-norm, 1-norm, inf-norm and Frobenius norm.
Your method is also possible. It is equivalent to the method above with the max-norm (where norm(A) = max abs Aij). It can be implemented in Eigen as:
(A - Matrix4f::Identity()).cwiseAbs().maxCoeff() < eps;
Update:
Actually, in Eigen there is a special method to check that: isIdentity. You give it the threshold value:
A.isIdentity(eps)
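
Putting both checks together, a minimal self-contained example (the eps value is arbitrary here):

#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix4f A = Eigen::Matrix4f::Identity();
    A(0, 1) += 1e-6f;  // simulate floating-point noise

    const float eps = 1e-4f;

    // Max-norm check: largest absolute deviation from the identity.
    bool byNorm = (A - Eigen::Matrix4f::Identity()).cwiseAbs().maxCoeff() < eps;

    // Eigen's built-in check.
    bool builtin = A.isIdentity(eps);

    std::cout << byNorm << " " << builtin << "\n";  // prints: 1 1
}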

How to efficiently store and manipulate sparse binary matrices in Octave?

I'm trying to manipulate sparse binary matrices in GNU Octave, and it's using way more memory than I expect, and relevant sparse-matrix functions don't behave the way I want them to. I see this question about higher-than-expected sparse-matrix storage in MATLAB, which suggests that this matrix should consume even more memory, but helped explain (only) part of this situation.
For a sparse, binary matrix, I can't figure out any way to get Octave to NOT STORE the array of values (they're always implicitly 1, so need not be stored). Can this be done? Octave always seems to consume memory for a values array.
A trimmed-down example demonstrating the situation: create random sparse matrix, turn it into "binary":
mys=spones(sprandn(1024,1024,.03)); nnz(mys), whos mys
This shows the situation. The consumed size is consistent with the storage mechanism outlined in the aforementioned SO answer and expanded below, if spones() creates an array of storage-class double and all indices are 32-bit (i.e., TotalStorageSize - rowIndices - columnIndices == NumNonZero*sizeof(double)). Unnecessarily storing these values (all 1s, as doubles) accounts for over half of the total memory consumed by this 3%-sparse object.
After messing with this (for too long) while composing this question, I discovered some partial workarounds, so I'm going to "self-answer" (only) part of the question for continuity (hopefully), but I didn't figure out an adequate answer to the main question:
How do I create an efficiently-stored ("no-/implicit-values") binary matrix in Octave?
Additional background on storage format follows...
The Octave docs say the storage format for sparse matrices uses format Compressed Sparse Column (CSC). This seems to imply storing the following arrays (expanding on aforementioned SO answer, with canonical Yale format labels and tweaks for column-major order):
values (A): number-of-nonzeros (NNZ) entries of storage-class size;
row numbers (IA): NNZ entries of index size (hopefully int64 but maybe int32);
start of each column (JA): number-of-columns-plus-1 entries of index size.
In this case, for binary-only storage, I hope there's a way to completely avoid storing array (A), but I can't figure it out.
Full disclosure: As noted above, as I was composing this question, I discovered a workaround to reduce memory usage, so I'm "self-answering" part of this here, but it still isn't fully satisfying, so I'm still listening for a better actual answer to storage of a sparse binary matrix without a trivial, bloated, unnecessary values array...
To get a binary-like value out of a number-like value and reduce the memory usage in this case, use "logical" storage, created by logical(X). For example, building from above,
logicalmys = logical(mys);
creates a sparse bool matrix that takes up less memory (a 1-byte logical rather than an 8-byte double for each entry of the values array).
Adding more information to the whos information using whos_line_format helps illuminate the situation: The default string includes 5 of the 7 properties (see docs for more). I'm using the format string
whos_line_format(" %a:4; %ln:6; %cs:16:6:1; %rb:12; %lc:8; %e:10; %t:20;\n")
to add display of "elements", and "type" (which is distinct from "class").
With that, whos mys logicalmys shows something like
Attr  Name        Size       Bytes   Class    Elements  Type
====  ====        ====       =====   =====    ========  ====
      mys         1024x1024  391100  double   32250     sparse matrix
      logicalmys  1024x1024  165350  logical  32250     sparse bool matrix
So this shows a distinction between "sparse matrix" and "sparse bool matrix". However, the total memory consumed by logicalmys is consistent with actually storing an array of NNZ booleans (1 byte each). That is:
totalMemory minus rowIndices minus columnOffsets leaves exactly NNZ bytes;
in numbers,
165350 - 32250*4 - 1025*4 == 32250.
So we're still storing 32250 elements, all of which are 1. Further, if you set one of the 1-elements to zero, it reduces the reported storage! For a good time, try: pick a nonzero element, e.g., (42,1), then zero it: logicalmys(42,1) = 0; then whos it!
My hope is that this is correct, and that this clarifies some things for those who might be interested. Comments, corrections, or actual answers welcome!
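
For convenience, here is the whole experiment as a single script, using only the commands already shown above (byte counts may vary slightly across Octave versions):

mys = spones(sprandn(1024, 1024, .03));  % sparse matrix, values stored as doubles
logicalmys = logical(mys);               % sparse bool matrix, 1-byte values
whos_line_format(" %a:4; %ln:6; %cs:16:6:1; %rb:12; %lc:8; %e:10; %t:20;\n")
whos mys logicalmys
logicalmys(42, 1) = 0;                   % zeroing a stored element shrinks storage
whos logicalmys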

vector --> concurrent_vector migration + OpenGL restriction

I need to speed up some calculation, and the result of the calculation is then used to draw an OpenGL model.
A major speed-up was achieved when I changed std::vector to Concurrency::concurrent_vector and used parallel_for instead of plain for loops.
This vector (or concurrent_vector) is computed in a for (or parallel_for) loop and contains vertices for OpenGL to visualize.
Using std::vector is fine because the OpenGL rendering procedure relies on the fact that std::vector keeps its items in a contiguous sequence, which is not the case with concurrent_vector. The code runs something like this:
glVertexPointer(3, GL_FLOAT, 0, &vectorWithVerticesData[0]);
Generating a concurrent_vector and copying it into a std::vector is too expensive, since there are a lot of items.
So, the question is: I'd like to use OpenGL vertex arrays, but I'd also like to use concurrent_vector, which is incompatible with that OpenGL usage.
Any suggestions?
You're trying to use a data structure that doesn't store its elements contiguously with an API that requires contiguous storage. One of those has to give, and it's not going to be OpenGL: GL isn't going to walk concurrent_vector's internal data structure (not if you like performance).
So your option is to not use non-contiguous containers.
I can only guess at what you're doing (since you didn't provide example code for the generator), so that limits what I can advise. If your parallel_for iterates a fixed number of times (by "fixed", I mean a value that is known immediately before parallel_for executes and doesn't change based on how many times you've iterated), then you can just use a regular vector.
Simply size the vector up front with vector::resize (or construct it with the element count). This will value-initialize the elements, which means every element exists. You can now perform your parallel_for loop, but instead of using push_back or the like, you simply copy each element directly into its location in the output. I think parallel_for can iterate over the actual vector iterators, but I'm not positive. Either way, it doesn't matter; you won't get any race conditions unless you try to set the same element from different threads.
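
A minimal sketch of that pattern with the same PPL parallel_for (computeVertex is a hypothetical stand-in for your generator):

#include <ppl.h>  // Concurrency::parallel_for
#include <vector>
#include <cstddef>

struct Vertex { float x, y, z; };

Vertex computeVertex(std::size_t i);  // hypothetical per-vertex generator

std::vector<Vertex> makeVertices(std::size_t n) {
    std::vector<Vertex> out(n);  // pre-sized: every element already exists
    Concurrency::parallel_for(std::size_t(0), n, [&](std::size_t i) {
        out[i] = computeVertex(i);  // each index written by exactly one task: no race
    });
    return out;  // contiguous, so &out[0] is safe to pass to glVertexPointer
}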
