Reduced efficiency of OpenMP over time - parallel-processing

I have a large calculation algorithm where I use OpenMP to help speed it up. It works fine for the first 50 or so iterations (i.e. until y = 50 or so), but then starts to slow down progressively. I also notice that the CPU usage goes from ~100% to ~40% by the end.
The code looks something like this:
#include <iostream>
#include <cstdio>
#include <string>
#include <omp.h>
#include <ipp.h>

int main(){
    std::string filename = "Large_File.file";
    FILE* fid = fopen(filename.c_str(), "rb");
    Ipp32f* vector = ippsMalloc_32f(100000000);
    for (int y = 0; y < 300; y++){
        fread(vector, sizeof(float), 100000000, fid);
        #pragma omp parallel for
        for (int x = 0; x < 300; x++){
            // A time-consuming calculation
        }
    }
}

First of all, check whether this is exactly the same question, with the same answer, as this Stack Overflow link: Why does moving the buffer pointer slow down fread (C programming language)?
(It's not very far from what Hristo suggested)
Secondly, from the way you stated your question and from "common sense", it is most likely that the slowdown is driven by the "fread" call (it is the only thing that varies as "y" increases, at least in your "restricted" example).
Third, a side comment: since you use IPP, you are likely using a recent (probably Intel) compiler. If so, I would suggest more modern ways of allocating memory in an aligned manner: either _aligned_malloc() or #include <aligned_new>.
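For illustration only, a minimal sketch of the _aligned_malloc() route, assuming a Windows or Intel toolchain where _aligned_malloc()/_aligned_free() come from <malloc.h>; the 64-byte alignment and the buffer size are arbitrary choices for the example:
#include <cstddef>
#include <malloc.h>

int main(){
    const std::size_t count = 100000000;
    // Request a 64-byte aligned buffer; release it with _aligned_free, not free().
    float* buffer = static_cast<float*>(_aligned_malloc(count * sizeof(float), 64));
    if (!buffer) return 1;

    // ... read into / compute on the buffer as in the question ...

    _aligned_free(buffer);
    return 0;
}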
(I assume you expected to hear more about OpenMP-specific slow-downs, but it is quite unlikely that threading is related to your case; you could verify this by disabling OpenMP and comparing the y=50, y=100, y=150, ... runs, although that would not prove much if you were really dealing with sophisticated NUMA/threading composability issues, which again I don't believe is the case.)
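To see where the time actually goes, here is a minimal sketch (untested, using the file name and sizes from the question) that times the fread separately from the parallel loop on each iteration with omp_get_wtime():
#include <cstdio>
#include <omp.h>
#include <ipp.h>

int main(){
    FILE* fid = fopen("Large_File.file", "rb");
    Ipp32f* vector = ippsMalloc_32f(100000000);
    for (int y = 0; y < 300; y++){
        double t0 = omp_get_wtime();
        fread(vector, sizeof(float), 100000000, fid);
        double t1 = omp_get_wtime();
        #pragma omp parallel for
        for (int x = 0; x < 300; x++){
            // A time-consuming calculation
        }
        double t2 = omp_get_wtime();
        // If read time grows with y while compute time stays flat, fread is the culprit.
        printf("y=%d read=%.3fs compute=%.3fs\n", y, t1 - t0, t2 - t1);
    }
    ippsFree(vector);
    fclose(fid);
    return 0;
}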

Related

Problem with maximum index value of a vector [duplicate]

Why does the following work? I thought that writing to an index of a vector object beyond the end of the vector would cause a segmentation fault.
#include <iostream>
#include <vector>

using namespace std;

int main() {
    vector<int> x(1);
    x[10] = 1;
    cout << x[10] << endl;
}
What are the implications of this? Is there a safer way to initialize a vector of exactly n elements and write only to those? Should I always use push_back()?
Somebody implementing std::vector might easily decide to give it a minimum size of 10 or 20 elements or so, on the theory that the memory manager likely has a large enough minimum allocation size that it will use (about) the same amount of memory anyway.
As far as avoiding reading/writing past the end goes, one possibility is to avoid indexing whenever possible, and to use .at() when you truly can't avoid it.
I find that I can usually avoid doing indexing at all by using range-based for loops and/or standard algorithms for most tasks. For a trivial example, consider something like this:
int main() {
    vector<int> x(1);
    x.push_back(1);
    for (auto i : x)
        cout << i << "\n";
}
.at() does work, but I rarely find it useful or necessary--I suspect I use it less than once a year on average.
Under the covers, what actually happens when you address an element of an array or vector outside the container's bounds is a dereference of memory that is not part of the container. Reads and writes to that location can appear to "work" because it is just more memory you happen to be touching, but you are doing something very, very bad. You will generally see random junk when accessing out-of-bounds memory, because it may belong to something else or contain leftovers from earlier allocations; nothing zeroes it out for you. So the best practice, if you are ever in doubt about whether an access is within bounds, is to check it against the container's size, which you can obtain with vector.size().
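To make the difference concrete, here is a small sketch (the vector contents and indices are arbitrary) contrasting an unchecked write, a size() guard, and the bounds-checked .at():
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> x(1);        // exactly one element, x[0]
    // x[10] = 1;                 // undefined behaviour: may appear to work, may crash

    if (x.size() > 10)            // explicit guard before operator[]
        x[10] = 1;

    try {
        x.at(10) = 1;             // .at() checks the bound and throws instead
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }
}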

Max size of a 2-dimensional array in C++

I want to run a large computational program in 2 and 3 dimensions with arrays of size array[40000][40000] or more. This code illustrates my problem a bit; I commented out the vector version because it has the same problem (when I run it, it dies inside the vector library). How can I increase the memory available to the program, or delete (clean) some part of it while the program is running?
#include <iostream>
#include <cstdlib>
#include <vector>

using namespace std;

int main(){
    float array[40000][40000];
    //vector< vector<double> > array(1000,1000);
    cout << "bingo" << endl;
    return 0;
}
A slightly better option than vector (and far better than vector-of-vector1), which, like vector, uses dynamic allocation for the contents (and therefore doesn't overflow the stack), but doesn't invite resizing:
std::unique_ptr<float[][40000]> array{ new float[40000][40000] };
Conveniently, float[40000][40000] still appears, making it fairly obvious what is going on here even to a programmer unfamiliar with incomplete array types.
1 vector<vector<T> > is very bad, since it would have many different allocations, which all have to be separately initialized, and the resulting storage would be discontiguous. Slightly better is a combination of vector<T> with vector<T*>, with the latter storing pointers created one row apart into a single large buffer managed by the former.
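For comparison, here is a minimal sketch of the single-contiguous-buffer idea with plain row-major index arithmetic instead of a pointer table (the dimensions are reduced from the question's 40000 x 40000, which would need roughly 6.4 GB of RAM, so the example runs on an ordinary machine):
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t rows = 4000, cols = 4000;
    std::vector<float> array(rows * cols);   // one contiguous heap allocation

    auto at = [&](std::size_t r, std::size_t c) -> float& {
        return array[r * cols + c];          // row-major indexing
    };

    at(123, 456) = 1.0f;
    std::cout << at(123, 456) << "\n";
    return 0;
}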

OpenMP parallel for does not work

I'm studying OpenMP now, and I have a question. The running time of the following code and of the same code without the parallel section is statistically equal, even though all the threads are executing the function. I tried to look at some guides on the internet, but it did not help. So the question is, what is wrong with this parallel section?
int sumArrayParallel()
{
    int i = 0;
    int sum = 0;
    #pragma omp parallel for
    for (i = 0; i < arraySize; ++i)
    {
        cout << omp_get_thread_num() << " ";
        sum += testArray[i];
    }
    return sum;
}
There are two very common causes of OpenMP codes failing to exhibit improved performance over their serial counterparts:
The work being done is not sufficient to outweigh the overhead of parallel computation. Think of there being a cost, in time, for setting up a team of threads, for distributing work to them, and for gathering results from them. Unless this cost is less than the time saved by parallelising the computation, an OpenMP code, even if correct, will not show any speed-up and may show the opposite. You haven't shown us the numbers, so do the calculations on this yourself.
The programmer imposes serial operation on the parallel program, perhaps by wrapping data access inside memory fences, perhaps by accessing platform resources which are inherently serial. I suspect (but my knowledge of C is lousy) that your writing to cout may inadvertently serialise that part of your computation.
Of course, you can have a mixture of these two problems, too much serialisation and not enough work, resulting in disappointing performance.
For further reading this page on Intel's website is useful, and not just for beginners.
I think, though, that you have a more serious problem with your code than its poor parallel performance. Does the OpenMP version produce the correct sum ? Since you have made no specific provision sum is shared by all threads and they will race for access to it. While learning OpenMP it is a very good idea to attach the clause default(none) to your parallel regions and to take responsibility for defining the shared/private status of each variable in each region. Then, once you are fluent in OpenMP you will know why it makes sense to continue to use the default(none) clause.
Even if you reply "yes, the code does produce the correct result", the data race still exists and your program can't be trusted. Data races are funny like that: they don't show up in any of the tests you run, then, once you roll out your code into production, bang! Egg all over your face.
However, you seem to be rolling your own reduction and OpenMP provides the tools for doing this. Investigate the reduction clause in your OpenMP references. If I read your code correctly, and taking into account the advice above, you could rewrite the loop to
#pragma omp parallel for default(none) shared(arraySize, testArray) private(i) reduction(+:sum)
for (i = 0; i < arraySize; ++i)
{
    sum += testArray[i];
}
In a nutshell, using the reduction clause tells OpenMP to sort out the problems of summing a single value from work distributed across threads, avoiding race conditions etc.
Since OpenMP makes loop iteration variables private by default you could omit the clause private(i) from the directive without too much risk. Even better though might be to declare it inside the for statement:
#pragma omp parallel for default(none) shared(arraySize, testArray) reduction(+:sum)
for (int i = 0; i < arraySize; ++i)
Variables declared inside parallel regions are (leaving aside some special cases) always private.
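Putting the pieces together, here is a minimal self-contained sketch of the whole function with the reduction applied; arraySize and testArray are stand-ins for the question's globals, and the cout call is dropped because it serialises the loop:
#include <cstdio>

// Illustrative stand-ins for the globals used in the question.
int arraySize = 1000000;
int* testArray = nullptr;

int sumArrayParallel()
{
    int sum = 0;
    // reduction(+:sum) gives each thread a private partial sum and combines
    // them at the end, avoiding the data race; default(none) forces every
    // variable's sharing attribute to be stated explicitly.
    #pragma omp parallel for default(none) shared(arraySize, testArray) reduction(+:sum)
    for (int i = 0; i < arraySize; ++i)
    {
        sum += testArray[i];
    }
    return sum;
}

int main()
{
    testArray = new int[arraySize];
    for (int i = 0; i < arraySize; ++i)
        testArray[i] = 1;
    std::printf("sum = %d (expected %d)\n", sumArrayParallel(), arraySize);
    delete[] testArray;
    return 0;
}
Compile with -fopenmp (or your compiler's equivalent); without it the pragma is ignored and the function simply runs serially.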

OpenCL reduction, and passing a 2D array

Here is the loop I want to convert to OpenCL.
for (n = 0; n < LargeNumber; ++n) {
    for (n2 = 0; n2 < SmallNumber; ++n2) {
        A[n] += B[n2][n];
    }
    Re += A[n];
}
And here is what I have so far, although, I know it is not correct and missing some things.
__kernel void openCL_Kernel( __global int *A,
                             __global int **B,
                             __global int *C,
                             __global _int64 Re,
                             int D)
{
    int i = get_global_id(0);
    int ii = get_global_id(1);

    A[i] += B[ii][i];
    //barrier(..); ?
    Re += A[i];
}
I'm a complete beginner to this type of thing. First of all, I know that I can't pass a global double pointer to an OpenCL kernel. If you can, wait a few days or so before posting the full solution; I want to figure this out for myself, but if you can help point me in the right direction I would be grateful.
Concerning your problem with passing double pointers: that kind of problem is typically solved by copying the whole matrix (or whatever you are working on) into one contiguous block of memory and, if the rows have different lengths, passing another array that contains the offsets of the individual rows (so your access would look something like B[index[ii]+i]).
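A minimal host-side sketch of that flattening step (plain C++, no OpenCL calls; the row lengths and contents are made up for illustration):
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Ragged 2D input: rows of different lengths.
    std::vector<std::vector<int>> B = { {1, 2, 3}, {4, 5}, {6, 7, 8, 9} };

    // Flatten into one contiguous buffer plus a per-row offset table.
    std::vector<int> flat;
    std::vector<std::size_t> index;          // index[ii] = start of row ii in flat
    for (const auto& row : B) {
        index.push_back(flat.size());
        flat.insert(flat.end(), row.begin(), row.end());
    }

    // Element (ii, i) is then accessed the way described above.
    std::size_t ii = 2, i = 1;
    std::cout << flat[index[ii] + i] << "\n"; // prints 7

    // flat.data() and index.data() are what you would copy into two ordinary
    // one-dimensional device buffers and pass to the kernel.
    return 0;
}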
Now for your reduction down to Re: since you didn't mention what kind of device you are working on, I'm going to assume it's a GPU. In that case I would avoid doing the reduction in the same kernel, since it's going to be slow as hell the way you posted it (you would have to serialize the access to Re over thousands of threads, and the access to A[i] too).
Instead I would write one kernel which sums all B[*][i] into A[i], and put the reduction from A into Re in another kernel, doing it in several steps: you use a reduction kernel which operates on n elements and reduces them to something like n / 16 (or any other number), then you iteratively call that kernel until you are down to one element, which is your result (I'm making this description intentionally vague, since you said you wanted to figure things out yourself).
As a side note: you realize that the original code doesn't exactly have a nice memory access pattern? Assuming B is relatively large (and much larger than A due to the second dimension), having the inner loop iterate over the outer index is going to create a lot of cache misses. This is even worse when porting to the GPU, which is very sensitive to coherent memory access.
So reordering it like this may massively increase performance:
for (n2 = 0; n2 < SmallNumber; ++n2)
    for (n = 0; n < LargeNumber; ++n)
        A[n] += B[n2][n];

for (n = 0; n < LargeNumber; ++n)
    Re += A[n];
This is particularly true if you have a compiler which is good at autovectorization, since it might be able to vectorize that construct, but it's very unlikely to be able to do so for the original code (and if it can't prove that A and B[n2] don't refer to the same memory, it can't turn the original code into this).

Does it change performance to use a non-int counter in a loop?

I'm just curious and can't find the answer anywhere. Usually, we use an integer for a counter in a loop, e.g. in C/C++:
for (int i=0; i<100; ++i)
But we can also use a short integer or even a char. My question is: does it change the performance? It's only a few bytes less, so the memory savings are negligible. It just intrigues me whether I do any harm by using a char when I know that the counter won't exceed 100.
Probably using the "natural" integer size for the platform will provide the best performance. In C++ this is usually int. However, the difference is likely to be small and you are unlikely to find that this is the performance bottleneck.
Depends on the architecture. On the PowerPC, there's usually a massive performance penalty involved in using anything other than int (or whatever the native word size is) -- eg, don't use short or char. Float is right out, too.
You should time this on your particular architecture because it varies, but in my test cases there was ~20% slowdown from using short instead of int.
I can't provide a citation, but I've heard that you often do incur a little performance overhead by using a short or char.
The memory savings are nonexistent since it's a temporary stack variable. The memory it lives in will almost certainly already be allocated, and you probably won't save anything by using something shorter, because the next variable will likely want to be aligned to a larger boundary anyway.
You can use whatever legal type you want in a for; it doesn't have to be integral or even built-in. For example, you can use iterators as well:
for( std::vector<std::string>::iterator s = myStrings.begin(); myStrings.end() != s; ++s )
{
...
}
Whether or not it will have an impact on performance comes down to a question of how the operators you use are implemented. So in the above example that means end(), operator!=() and operator++().
This is not really an answer. I'm just exploring what Crashworks said about the PowerPC. As others have pointed out already, using a type that maps to the native word size should yield the shortest code and the best performance.
$ cat loop.c
extern void bar();
void foo()
{
    int i;
    for (i = 0; i < 42; ++i)
        bar();
}
$ powerpc-eabi-gcc -S -O3 -o - loop.c
.
.
.L5:
        bl bar
        addic. 31,31,-1
        bge+ 0,.L5
It is quite different with short i instead of int i, and it looks like it won't perform as well either.
.L5:
        bl bar
        addi 3,31,1
        extsh 31,3
        cmpwi 7,31,41
        ble+ 7,.L5
No, it really shouldn't impact performance.
It probably would have been quicker to type in a quick program (you did the most complex line already) and profile it, than ask this question here. :-)
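In that spirit, here is a quick sketch of such a program (the iteration counts are arbitrary, and whether any difference shows up at all depends on the compiler and architecture):
#include <chrono>
#include <cstdio>

volatile int sink = 0;   // volatile sink keeps the loops from being optimised away

template <typename Counter>
double time_loop() {
    auto start = std::chrono::steady_clock::now();
    for (Counter i = 0; i < 100; ++i)        // the counter never exceeds 100
        for (int j = 0; j < 1000000; ++j)
            sink = sink + 1;
    std::chrono::duration<double> d = std::chrono::steady_clock::now() - start;
    return d.count();
}

int main() {
    std::printf("int:   %.3f s\n", time_loop<int>());
    std::printf("short: %.3f s\n", time_loop<short>());
    std::printf("char:  %.3f s\n", time_loop<char>());
    return 0;
}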
FWIW, in languages that use bignums by default (Python, Lisp, etc.), I've never seen a profile where a loop counter was the bottleneck. Checking the type tag is not that expensive -- a couple instructions at most -- but probably bigger than the difference between a (fix)int and a short int.
Probably not, as long as you don't do it with a float or a double. Since memory is cheap, you would probably be best off just using an int.
An unsigned or size_t should, in theory, give you better results (wow, easy, people, we are trying to optimise for evil here, and against those shouting 'premature' nonsense; it's the new trend).
However, it does have its drawbacks, primarily the classic one: it is easy to screw up.
Google devs seem to avoid it too, but it is a pain to fight against std or Boost.
If you compile your program with optimization (e.g., gcc -O), it doesn't matter. The compiler will allocate an integer register to the value and never store it in memory or on the stack. If your loop calls a routine, gcc will allocate one of the callee-saved registers r14-r31, which any called routine will save and restore. So use int, because that causes the least surprise to whoever reads your code.
