I want to run a large computational program in 2 and 3 dimensions with arrays of size array[40000][40000] or more. The code below illustrates my problem. I commented out the vector version because it fails the same way; when I run it, the crash points into the vector library. How can I get more memory for arrays like this, or free (clean) parts of them while the program is running?
#include <iostream>
#include <cstdlib>
#include <vector>
using namespace std;

int main(){
    float array[40000][40000]; // ~6.4 GB of automatic storage: far larger than any default stack
    //vector< vector<double> > array(1000, vector<double>(1000)); // corrected syntax; still ~12.8 GB at 40000x40000
    cout << "bingo" << endl;
    return 0;
}
A slightly better option than vector (and far better than vector-of-vector¹): like vector, it uses dynamic allocation for the contents (and therefore doesn't overflow the stack), but it doesn't invite resizing:
std::unique_ptr<float[][40000]> array{ new float[40000][40000] };
Conveniently, float[40000][40000] still appears, making it fairly obvious what is going on here even to a programmer unfamiliar with incomplete array types.
¹ vector<vector<T> > is very bad, since it makes many separate allocations, each of which has to be initialized individually, and the resulting storage is discontiguous. Slightly better is a combination of vector<T> with vector<T*>, with the latter storing pointers, one row apart, into a single large buffer managed by the former.
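A minimal sketch of that vector<T>-plus-vector<T*> combination (the names are mine, not the answer's; note that 40000x40000 floats is about 6.4 GB, so this needs a 64-bit build and enough RAM):

#include <cstddef>
#include <vector>

int main() {
    const std::size_t N = 40000;
    std::vector<float> buffer(N * N);       // one contiguous allocation for all elements
    std::vector<float*> rows(N);
    for (std::size_t r = 0; r < N; ++r)
        rows[r] = buffer.data() + r * N;    // row pointers, one row apart
    rows[123][456] = 1.0f;                  // indexes like a 2D array, storage stays contiguous
}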
Related
Why does the following work? I thought that writing to an index of a vector beyond its end would cause a segmentation fault.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> x(1);
    x[10] = 1;              // index 10 is well past the end of a 1-element vector
    cout << x[10] << endl;
}
What are the implications of this? Is there a safer way to initialize a vector of exactly n elements and write only to those? Should I always use push_back()?
Somebody implementing std::vector might easily decide to give it a minimum capacity of 10 or 20 elements or so, on the theory that the memory manager likely has a large enough minimum allocation size that it would use (about) the same amount of memory anyway.
As far as avoiding reading/writing past the end, one possibility is to avoid using indexing whenever possible, and using .at() to do indexing when you truly can't avoid it.
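For illustration, a small sketch of the difference, reusing the vector from the question:

#include <iostream>
#include <stdexcept>
#include <vector>
using namespace std;

int main() {
    vector<int> x(1);
    // x[10] = 1;           // no bounds check: undefined behavior
    try {
        x.at(10) = 1;       // bounds-checked: throws instead of scribbling on memory
    } catch (const out_of_range& e) {
        cout << "caught: " << e.what() << endl;
    }
}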
I find that I can usually avoid doing indexing at all by using range-based for loops and/or standard algorithms for most tasks. For a trivial example, consider something like this:
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> x(1);
    x.push_back(1);
    for (auto i : x)
        cout << i << "\n";
}
.at() does work, but I rarely find it useful or necessary--I suspect I use it less than once a year on average.
Under the covers, what actually happens when you address an element of an array or vector outside the container's bounds is a dereference of memory that is not part of the container. Reads and writes to that location can "work", or appear to work, because it is just more memory you are reading and writing; nevertheless, you are doing something very, very bad. You will generally see random junk when accessing memory outside your bounds, because it may belong to something else or be left over from earlier allocations; ordinary allocations are not zeroed for you. So whenever you are in doubt about whether you are inside the bounds, the best practice is to check your reads and writes against the container's size, which you can get with vector::size().
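A minimal sketch of that check, reusing the vector from the question:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> x(1);
    size_t i = 10;
    if (i < x.size())       // guard any indexed access you are unsure about
        cout << x[i] << endl;
    else
        cout << "index " << i << " is out of bounds" << endl;
}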
So I've recently become familiar with (and fallen in love with) Boost and C++11 smart pointers. They make memory management SO much easier. And, on top of all that, they can usually still work with legacy code (through the use of the get() call).
However, the big hole I keep running into is multidimensional jagged arrays. The correct way to do it is to have a boost::scoped_array<boost::scoped_array<double>> or a vector<vector<double>>, which will clean up nicely. However, you cannot easily get a double** out of either of these to send to legacy code.
Is there any way to do this, or am I stuck with non-smart jagged arrays?
I'd start with std::vector<std::vector<double>> for storage, unless the structure was highly static.
To produce my array-of-arrays, I'd produce a std::vector<double*> via transformation of my above storage, using syntax like transform_to_vector( storage, []( std::vector<double>& v ) { return v.data(); } ) (transform_to_vector left as an exercise to the reader; one possible version is sketched below).
Keeping the two in sync would be a simple matter of wrapping it in a small class.
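Since transform_to_vector is left as an exercise, here is one way it might be written (a sketch assuming C++11; the helper's name comes from the answer, the body is mine):

#include <vector>

template <typename Container, typename F>
auto transform_to_vector(Container& c, F f) -> std::vector<decltype(f(*c.begin()))> {
    std::vector<decltype(f(*c.begin()))> result;
    result.reserve(c.size());
    for (auto& element : c)
        result.push_back(f(element));   // e.g. v.data() for each row
    return result;
}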
If the jagged array is relatively fixed in size, I'd take a std::vector<std::size_t> to create my buffer (or maybe a std::initializer_list<std::size_t> -- actually a template<typename Container>, so the caller picks what container to provide, and I'd just for( : ) over it twice), then create a single std::vector<double> with the sum of the sizes, then build a std::vector<double*> at the dictated offsets (a sketch follows).
Resizing this gets expensive, which is a disadvantage.
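A sketch of that fixed-geometry builder (all names are mine, not the answer's):

#include <cstddef>
#include <vector>

struct JaggedArray {
    std::vector<double>  storage;   // one allocation holding every element
    std::vector<double*> rows;      // row bases at the dictated offsets
};

template <typename Container>
JaggedArray make_jagged(const Container& sizes) {
    JaggedArray j;
    std::size_t total = 0;
    for (std::size_t s : sizes) total += s;   // first pass: sum of the sizes
    j.storage.resize(total);
    double* p = j.storage.data();
    for (std::size_t s : sizes) {             // second pass: record the offsets
        j.rows.push_back(p);
        p += s;
    }
    return j;   // j.rows.data() is the double** a legacy interface wants
}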
A nice property of using std::vector is that newer APIs get full access to proper begin and end values. If you have a single large buffer, you can expose a range view of the sub-arrays to new code (a structure containing a double* begin() and a double* end(), and, while we are at it, a double& operator[] and a std::size_t size() const { return end()-begin(); }; sketch below), so new code can bask in the glory of full-on C++ container-style views while you keep C compatibility for legacy interfaces.
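A sketch of such a view structure, following the description above:

#include <cstddef>

struct RowView {
    double* first;
    double* last;
    double* begin() const { return first; }
    double* end()   const { return last; }
    double& operator[](std::size_t i) const { return first[i]; }
    std::size_t size() const { return static_cast<std::size_t>(last - first); }
};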
If you're working in C++11, you should probably work with unique_ptr<T[]> rather than scoped_array<T>. It can do everything that scoped_array can, and then some.
If you want a rectangular array, I recommend using a unique_ptr<double[]> to hold the main data and a unique_ptr<double*[]> to hold the row bases. It works something like this:
unique_ptr<double[]> data{ new double[5*3] };   // 5 rows x 3 columns, one allocation
unique_ptr<double*[]> rows{ new double*[5] };   // one base pointer per row
rows[0] = data.get();
for ( size_t i = 1; i != 5; ++i )
    rows[i] = rows[i-1] + 3;                    // each row starts 3 elements after the last
Then you can pass rows.get() to a function taking double**. This approach can work for a non-rectangular array as well, provided the geometry of the array is known at array creation time so that you can allocate all the data at once and point rows to the proper offsets. (It may not be as straightforward as a simple loop, though.)
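For example, the call into legacy code might then look like this (legacy_process is a hypothetical API, not something from the question):

void legacy_process(double** m, int nrows, int ncols);  // hypothetical legacy interface

legacy_process(rows.get(), 5, 3);   // the unique_ptrs retain ownership throughout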
This will also give you better locality of reference and memory usage, since you only perform two allocations. All of your data will be stored together in memory and there won't be additional overhead for the separate allocations.
If you want to change the geometry of the jagged array after creating it, you will need to come up with a principled way of managing the storage for this solution to be applicable. However, since changing the geometry using scoped_array is awkward (requiring specific uses of swap()), I wouldn't be surprised if this isn't an issue for you.
(Note that this approach can work with scoped_array as well as unique_ptr<T[]>; I'm simply illustrating it using unique_ptr since we're in C++11 now.)
In a std::vector of a non-POD data type, is there a difference between a vector of objects and a vector of (smart) pointers to objects? I mean a difference in the implementation of these data structures by the compiler.
E.g.:
class Test {
public:              // public so the example code in the answer below can access the members
    std::string s;
    Test *other;
};
std::vector<Test> vt;
std::vector<Test*> vpt;
Could it be that there is no performance difference between vt and vpt?
In other words: when I define a vector<Test>, internally will the compiler create a vector<Test*> anyway?
No, this is not allowed by the C++ standard. The following code is legal C++:
vector<Test> vt;
Test t1; t1.s = "1"; t1.other = NULL;
Test t2; t2.s = "2"; t2.other = NULL;
vt.push_back(t1);
vt.push_back(t2);
Test* pt = &vt[0]; // legal: vector elements are contiguous
pt++;
Test q = *pt;      // q is now a copy of t2
In other words, a vector's elements are laid out contiguously, like a C array (accessing them through a raw pointer is legal), so the compiler effectively has to store the elements internally as an array and may not just store pointers.
But beware that the array pointer is valid only as long as the vector is not reallocated (which normally only happens when the size grows beyond capacity).
In general, whatever the type being stored in the vector is, instances of that may be copied. This means that if you are storing a std::string, instances of std::string will be copied.
For example, when you push a Type into a vector, the Type instance is copied into an instance housed inside the vector. Copying a pointer is cheap but, as Konrad Rudolph pointed out in the comments, that should not be the only thing you consider.
For simple objects like your Test, copying is going to be so fast that it will not matter.
Additionally, with C++11, move semantics let the vector avoid an extra copy when one is not necessary.
So in short: A pointer will be copied faster, but copying is not the only thing that matters. I would worry about maintainable, logical code first and performance when it becomes a problem (or the situation calls for it).
As for your question about an internal pointer vector: no, vectors are implemented as arrays that are reallocated to a larger size when necessary. You can find GNU's libstdc++ implementation of vector online.
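A small experiment that makes those reallocations visible (the growth pattern is implementation-defined, so your output will vary):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    size_t last = v.capacity();
    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last) {   // a reallocation: a new, larger array was made
            last = v.capacity();
            cout << "size " << v.size() << " -> capacity " << last << endl;
        }
    }
}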
The answer gets a lot more complicated below the C++ level. Pointers will of course be involved, since an entire program cannot fit into registers. I don't know enough about that level to elaborate further, though.
Here is the loop I want to convert to openCL.
for(n=0; n < LargeNumber; ++n) {
    for (n2=0; n2< SmallNumber; ++n2) {
        A[n] += B[n2][n];
    }
    Re += A[n];
}
And here is what I have so far, although, I know it is not correct and missing some things.
__kernel void openCL_Kernel( __global int *A,
                             __global int **B,
                             __global int *C,
                             __global _int64 Re,
                             int D)
{
    int i = get_global_id(0);
    int ii = get_global_id(1);

    A[i] += B[ii][i];
    //barrier(..); ?

    Re += A[i];
}
I'm a complete beginner at this sort of thing. First of all, I know that I can't pass a global double pointer to an OpenCL kernel. If you can, wait a few days or so before posting the solution; I want to figure this out for myself, but if you can help point me in the right direction I would be grateful.
Concerning your problem with passing double pointers: that kind of problem is typically solved by copying the whole matrix (or whatever you are working on) into one contiguous block of memory and, if the rows have different lengths, passing another array that contains the offsets of the individual rows (so an access would look something like B[index[ii] + i]). A host-side sketch follows.
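A sketch of that flattening on the host (the names are mine; the kernel would then take flat and index as two ordinary __global buffers):

#include <cstddef>
#include <vector>

void flatten(const std::vector<std::vector<int>>& B,
             std::vector<int>& flat,               // one contiguous block for the device
             std::vector<std::size_t>& index) {    // where each row starts inside flat
    for (const auto& row : B) {
        index.push_back(flat.size());
        flat.insert(flat.end(), row.begin(), row.end());
    }
    // in the kernel, an access to B[ii][i] becomes flat[index[ii] + i]
}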
Now for your reduction down to Re: since you didn't mention what kind of device you are working on, I'm going to assume it's a GPU. In that case I would avoid doing the reduction in the same kernel, since it will be slow as hell the way you posted it (you would have to serialize the access to Re, and to A[i], over thousands of threads).
Instead I would write one kernel that sums all B[*][i] into A[i], and put the reduction from A into Re in another kernel, doing it in several steps: use a reduction kernel that operates on n elements and reduces them to something like n/16 (or any other factor), then call that kernel iteratively until you are down to one element, which is your result. (I'm keeping this description intentionally vague, since you said you wanted to figure things out yourself.)
As a side note: you realize that the original code doesn't exactly have a nice memory access pattern? Assuming B is relatively large (and much larger than A, due to the second dimension), having the inner loop iterate over the outer index creates a lot of cache misses. This is even worse when porting to the GPU, which is very sensitive to coalesced memory access.
So reordering it like this may massively increase performance:
for (n2=0; n2< SmallNumber; ++n2)
    for(n=0; n < LargeNumber; ++n)
        A[n] += B[n2][n];

for(n=0; n < LargeNumber; ++n)
    Re += A[n];
This is particularly true if you have a compiler that is good at autovectorization, since it might be able to vectorize this construct, but it's very unlikely to manage it for the original code (and if it can't prove that A and B[n2] never refer to the same memory, it can't turn the original code into this).
I'm trying to add the rows of a 4800x9600 matrix together, resulting in a matrix 1x9600.
What I've done is split the 4800x9600 matrix into 9600 vectors of 4800 elements each. I then perform a reduction on each set of 4800 elements.
The trouble is, this is really slow...
Anyone got any suggestions?
Basically, I'm trying to implement MATLAB's sum(...) function.
Here is the code which I've verified works fine, it's just it's really slow:
void reduceRows(Matrix Dresult, Matrix DA)
{
    // Split DA into chunks, one column at a time
    Matrix Dchunk;
    Dchunk.h = 1; Dchunk.w = DA.h;
    cudaMalloc((void**)&Dchunk.data, Dchunk.h*Dchunk.w*sizeof(float));

    Matrix DcolSum;
    DcolSum.h = 1; DcolSum.w = 1;
    //cudaMalloc((void**)&DcolSum.data,DcolSum.h*DcolSum.w*sizeof(float)); // not needed: DcolSum.data aliases into Dresult below

    int i;
    for (i = 0; i < DA.w; i++) // loop over each column
    {
        //printf("%d ",i);
        cudaMemcpy(Dchunk.data, &DA.data[i*DA.h], DA.h*sizeof(float), cudaMemcpyDeviceToDevice);
        DcolSum.data = &Dresult.data[i];
        reduceTotal(DcolSum, Dchunk);
    }
    cudaFree(Dchunk.data);
}
Matrix is defined as:
typedef struct{
long w;
long h;
float* data;
}Matrix;
reduceTotal() just calls the standard NVIDIA reduction; it sums all the elements in Dchunk and puts the answer in DcolSum.
I'm about to do all this on the CPU if I can't find an answer... ;(
Many thanks in advance,
Instead of looping over each column, parallelize across the columns: each of the 9600 threads sums the 4800 entries in its column and puts the sum in the appropriate place in the result vector. A sketch of such a kernel is below.
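A minimal sketch of that kernel (CUDA), assuming the column-major layout implied by the question's cudaMemcpy; the names are mine:

__global__ void sumColumns(const float* data, float* result, int h, int w)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col >= w) return;
    float sum = 0.0f;
    for (int row = 0; row < h; ++row)
        sum += data[col * h + row];   // column col is contiguous at offset col*h
    result[col] = sum;
}

// launched with one thread per column, e.g.:
// sumColumns<<<(9600 + 255) / 256, 256>>>(DA.data, Dresult.data, 4800, 9600);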
If you're looking for a library to make working with Cuda simpler, I highly recommend Thrust: http://code.google.com/p/thrust/
Using Thrust, I would create a functor to hold your matrix's pointer in device memory, and then map it over a sequence of column indices. The operator() of the functor would take an index, sum up everything in that column of the matrix, and return the sum. Then you would have your sums sitting in a thrust::device_vector without any memory copies (or even direct CUDA calls).
Your functor might look something like:
struct ColumnSumFunctor {
    Matrix matrix;   // device data pointer plus dimensions, copied by value

    // Make a functor to sum columns of the matrix
    ColumnSumFunctor(const Matrix& m) : matrix(m) {}

    // Compute and return the sum of the specified column
    __device__
    float operator()(const int& column) const {
        float sum = 0.0f;
        for (long row = 0; row < matrix.h; ++row)
            sum += matrix.data[column * matrix.h + row];   // column-major, as in the question
        return sum;
    }
};
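The functor might then be invoked like this (a fragment, assuming the DA matrix from the question is already in device memory):

#include <thrust/device_vector.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/transform.h>

thrust::device_vector<float> sums(DA.w);
thrust::transform(thrust::counting_iterator<int>(0),
                  thrust::counting_iterator<int>(static_cast<int>(DA.w)),
                  sums.begin(),
                  ColumnSumFunctor(DA));
// sums[i] now holds the total of column i, computed entirely on the device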
Reduction is a very basic operation in GPGPU; it's supposed to be fast, and 9600 reductions shouldn't be slow either.
What graphics card are you using?
I suggest you split it into 9600 arrays and each time reduce an array of 4800 elements into one result. Instead of reduceTotal, I suggest you use CUDPP to perform the reduction; CUDPP is like the STL of CUDA, and it's implemented with performance in mind.
http://code.google.com/p/cudpp/
I think your problem is that you are launching 9600 x 2 kernels. This should be an easy algorithm to express as a single kernel.
The most naive way to implement it would not coalesce memory reads, but it could well be faster than what you are doing now.
Once you've got the naive version working, coalesce your memory reads: e.g. have every thread in a block read 16 consecutive floats into shared memory, call __syncthreads(), accumulate the relevant 16 floats into a register, call __syncthreads() again, then repeat.
The GPU Computing SDK has lots of examples of reduction techniques.