Reduction with OpenMP

I am trying to compute the mean of a 2D matrix using OpenMP. This 2D matrix is actually an image.
I am doing a thread-wise division of the data. For example, if I have N threads, then thread 0 processes Rows/N rows, and so on.
My question is: can I use the OpenMP reduction clause with "#pragma omp parallel"?
#pragma omp parallel reduction( + : sum )
{
    if( thread == 0 )
    {
        bla bla code
        sum = sum + val;
    }
    else if( thread == 1 )
    {
        bla bla code
        sum = sum + val;
    }
}

Yes, you can - the reduction clause is applicable to the whole parallel region as well as to individual for worksharing constructs. This allows, for example, reduction over computations done in different parallel sections (the preferred way to restructure the code):
#pragma omp parallel sections private(val) reduction(+:sum)
{
    #pragma omp section
    {
        bla bla code
        sum += val;
    }
    #pragma omp section
    {
        bla bla code
        sum += val;
    }
}
You can also use the OpenMP for worksharing construct to automatically distribute the loop iterations among the threads in the team instead of reimplementing it using sections:
#pragma omp parallel for private(val) reduction(+:sum)
for (row = 0; row < Rows; row++)
{
    bla bla code
    sum += val;
}
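Applied to the original question (computing the mean of a 2D image), a minimal, self-contained sketch could look like this (compile with -fopenmp; the image size and the dummy pixel values are made up purely for illustration):
#include <stdio.h>

#define ROWS 480
#define COLS 640

int main(void)
{
    static unsigned char image[ROWS][COLS];

    /* fill the image with dummy data */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            image[r][c] = (unsigned char)((r + c) % 256);

    double sum = 0.0;

    /* collapse(2) distributes all ROWS*COLS iterations over the team;
       the reduction gives each thread a private sum that is combined
       into the original variable at the end of the region */
    #pragma omp parallel for collapse(2) reduction(+:sum)
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += image[r][c];

    printf("mean = %f\n", sum / (double)(ROWS * COLS));
    return 0;
}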
Note that reduction variables are private and their intermediate values (i.e. the value they hold before the reduction at the end of the parallel region) are only partial and not very useful. For example the following serial loop cannot be (easily?) transformed to a parallel one with reduction operation:
for (row = 0; row < Rows; row++)
{
    bla bla code
    sum += val;
    if (sum > threshold)
        yada yada code
}
Here the yada yada code should be executed in each iteration once the accumulated value of sum has passed the value of threshold. When the loop is run in parallel, the private values of sum might never reach threshold, even if their sum does.

In your case, sum = sum + val could be interpreted as val[i] = val[i-1] + val[i] in a 1-D array (or val[rows][cols] = val[rows][cols-1] + val[rows][cols] in a 2-D array), which is a prefix-sum calculation.
Reduction is one solution for computing such a sum; you can use reduction with commutative and associative operators such as '+' and '*'.

Related

Question about OpenMP sections and critical

I am trying to make a fast parallel loop. In each iteration of the loop, I build an array, which is costly, so I want that work distributed over many threads. After the array is built, I use it to update a matrix. Here it gets tricky, because the matrix is common to all threads, so only one thread can modify parts of the matrix at a time; but when I work on the matrix, it turns out I can distribute that work too, since I can work on different parts of the matrix at the same time.
Here is what I currently am doing:
#pragma omp parallel for
for (i = 0; i < n; ++i)
{
    ... build array bi ...
    #pragma omp critical
    {
        update_matrix(A, bi)
    }
}
...
subroutine update_matrix(A, b)
{
    printf("id0 = %d\n", omp_get_thread_num());
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            printf("id1 = %d\n", omp_get_thread_num());
            modify columns 1 to j of A using b
        }
        #pragma omp section
        {
            printf("id2 = %d\n", omp_get_thread_num());
            modify columns j+1 to k of A using b
        }
    }
}
The problem is that the two different sections of the update_matrix() routine are not being parallelized. The output I get looks like this:
id0 = 19
id1 = 0
id2 = 0
id0 = 5
id1 = 0
id2 = 0
...
So the two sections are being executed by the same thread (0). I tried removing the #pragma omp critical in the main loop but it gives the same result. Does anyone know what I'm doing wrong?
#pragma omp parallel sections should not work there because you are already in a parallel part of the code, distributed by the #pragma omp parallel for clause. Unless you have enabled nested parallelism with omp_set_nested(1);, the parallel sections clause will be ignored.
Note that this is not necessarily efficient anyway, since spawning new threads has an overhead cost which may not be worthwhile if the update_matrix part is not very CPU intensive.
You have several options:
Forget about it. If the non-critical part of the loop is really where most of the computation happens and you already have as many threads as CPUs, spawning extra threads for a simple operation will do no good. Just remove the parallel sections clause in the subroutine.
Try enabling nesting with omp_set_nested(1);, for example as sketched below.
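A minimal sketch of that option, keeping the structure of the code from the question (n, A, bi and update_matrix are the question's own names); omp_set_nested must be called before the outer parallel region:
#include <omp.h>

omp_set_nested(1);          // allow the inner "parallel sections" to create its own team
#pragma omp parallel for
for (i = 0; i < n; ++i)
{
    ... build array bi ...
    #pragma omp critical
    {
        update_matrix(A, bi);   // its #pragma omp parallel sections is no longer ignored
    }
}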
Another option, which comes at the cost of a double synchronization overhead, is to use named critical sections. There may be only one thread in critical section ONE_TO_J and one in critical section J_TO_K, so up to two threads may update the matrix in parallel. This is costly in terms of synchronization overhead.
#pragma omp parallel for
for (i = 0; i < n; ++i)
{
    ... build array bi ...
    update_matrix(A, bi); // not critical
}
...
subroutine update_matrix(A, b)
{
    printf("id0 = %d\n", omp_get_thread_num());
    #pragma omp critical(ONE_TO_J)
    {
        printf("id1 = %d\n", omp_get_thread_num());
        modify columns 1 to j of A using b
    }
    #pragma omp critical(J_TO_K)
    {
        printf("id2 = %d\n", omp_get_thread_num());
        modify columns j+1 to k of A using b
    }
}
Or use atomic operations to edit the matrix, if this is suitable.
#pragma omp parallel for
for (i = 0; i < n; ++i)
{
    ... build array bi ...
    update_matrix(A, bi); // not critical
}
...
subroutine update_matrix(A, b)
{
    float tmp;
    printf("id0 = %d\n", omp_get_thread_num());
    for (int row = 0; row < max_row; row++)
        for (int column = 0; column < k; column++) {
            tmp = some_function(b, row, column);
            #pragma omp atomic
            A[column][row] += tmp;
        }
}
By the way, data is stored in row-major order in C, so you should be updating the matrix row by row rather than column by column. This will avoid false sharing and improve the algorithm's memory-access performance.
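As a minimal sketch of that suggestion, keeping the A[column][row] indexing from the snippet above (some_function, max_row and k are the hypothetical names from the question), one can simply swap the loop order so that the last, contiguous index varies fastest:
subroutine update_matrix(A, b)
{
    printf("id0 = %d\n", omp_get_thread_num());
    for (int column = 0; column < k; column++)        // outer loop over the first index
        for (int row = 0; row < max_row; row++) {     // inner loop over the last index
            float tmp = some_function(b, row, column);
            #pragma omp atomic
            A[column][row] += tmp;                    // consecutive iterations touch adjacent memory
        }
}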

Blitz++, armadillo and OpenMP very slow

I have been trying for a very long time to learn how to parallelise, and I have been reading lots of notes on OpenMP. So I tried to use it, and the result is that all the places where I tried to parallelise are 5 times slower than the serial case, and I am wondering why...
My code is the following:
toevaluate is a blitz matrix with two columns and rows rows.
storecallj and storecallk are just two blitz vectors that I use to store the calls and avoid extra function calls.
matrix is a square armadillo matrix of size cols (cols = rows), and the length of a row is rows (I will use it later for something else, and it is more convenient to define it as an armadillo matrix).
externfunction1 is a function defined outside of this code which computes the (x,y) values of the f(a,b) function, where (a,b) is the input and (x,y) is the output.
problem is a string variable and normal is a boolean.
resultj and resultk are the (x,y) vectors storing the output of those functions.
externfunction2 is another function defined outside of this code which computes a double by evaluating funcci (i=1,2), which is a blitz polynomial. This polynomial gets evaluated at a double, innerprod, to give another double, wvaluei (i=1,2).
I think these are all the generalities. The code is below.
{
    omp_set_dynamic(0);
    OMP_NUM_THREADS = 4;
    omp_set_num_threads(OMP_NUM_THREADS);
    int chunk = int(floor(cols/OMP_NUM_THREADS));
    #pragma omp parallel shared(matrix,storecallj,toevaluate,resultj,cols,rows,storecallk,atzero,innerprod,wvalue1,wvalue2,funcc1,funcc2,storediff,checking,problem,normal,chunk) private(tid,j,k)
    {
        tid = omp_get_thread_num();
        if (tid == 0)
        {
            printf("Initializing parallel process...\n");
        }
        #pragma omp for collapse(2) schedule(dynamic, chunk) nowait
        for (j = 0; j < cols; ++j)
        {
            for (k = 0; k < rows; ++k)
            {
                storecallj = toevaluate(j, All);
                externfunction1(problem, normal, storecallj, resultj);
                storecallk = toevaluate(k, All);
                storediff = storecallj - storecallk;
                if (k == j) {
                    Matrix(j,k) = -atzero*sum(resultj*resultj);
                } else {
                    innerprod = sqrt(sum(storediff*storediff));
                    checking = 1.0 - c*innerprod;
                    if (checking > 0.0)
                    {
                        externfunction1(problem, normal, storecallk, resultk);
                        externfunction2(c, innerprod, funcc1, wvalue1);
                        externfunction2(c, innerprod, funcc2, wvalue2);
                        Matrix(j,k) = -wvalue2*sum(storediff*resultj)*sum(storediff*resultk) - wvalue1*sum(resultj*resultk);
                    }
                }
            }
        }
    }
}
and it is very slow. Another thing that I have tried is:
for (j = 0; j < cols; ++j)
{
    storecallj = toevaluate(j, All);
    externfunction1(problem, normal, storecallj, resultj);
    #pragma omp for collapse(2) schedule(dynamic, chunk) nowait
    for (k = 0; k < rows; ++k)
    {
        storecallk = toevaluate(k, All);
        storediff = storecallj - storecallk;
        ...
I am using dynamic because I want to avoid problems in case the total number of points to evaluate is not a multiple of 4.
Could someone please give me a hand to understand why this is so slow?

shared arrays in OpenMP

I'm trying to parallelize a piece of C++ code with OpenMP but I'm facing some problems.
In fact, my parallelized code is not faster than the serial one.
I think I have understood the cause of this, but I'm not able to solve it.
The structure of my code is like this:
int vec1[M];
int vec2[N];
...initialization of vec1 and vec2...
for (int it = 0; it < tot_iterations; it++) {
    if ( (it+1)%2 != 0 ) {
        #pragma omp parallel for
        for (int j = 0; j < N; j++) {
            ....code involving a call to a function to which I'm passing as a parameter vec1.....
            if (something) { vec2[j] = vec2[j] - 1; }
        }
    }
    else {
        #pragma omp parallel for
        for (int i = 0; i < M; i++) {
            ....code involving a call to a function to which I'm passing as a parameter vec2.....
            if (something) { vec1[i] = vec1[i] - 1; }
        }
    }
}
I thought that maybe my parallelized code is slower because multiple threads want to access the same shared array and one has to wait until another has finished, but I'm not sure how things really work. But I can't make vec1 and vec2 private, since the updates wouldn't be seen in the other iterations...
How can I improve it?
The issue you describe when accessing the same array with multiple threads is called "false sharing". Unless your array is small, it should not be the bottleneck here, as #pragma omp parallel for uses static scheduling in the default implementation (with gcc at least), so each thread should access its own portion of the array without contention, unless your "...code involving a call to a function to which I'm passing as a parameter vec2....." really accesses a lot of elements of the array.
Case 1: You do not access most elements in the array in this part of the code
Is M big enough to make parallelism useful?
Can you move the parallelism to the outer loop? (with one loop for vec1 only and the other for vec2 only)
Try moving the parallel region outward:
int vec1[M];
int vec2[N];
...initialization of vec1 and vec2...
#pragma omp parallel
for (int it = 0; it < tot_iterations; it++) {
    if ( (it+1)%2 != 0 ) {
        #pragma omp for
        for (int j = 0; j < N; j++) {
            ....code involving a call to a function to which I'm passing as a parameter vec1.....
            if (something) { vec2[j] = vec2[j] - 1; }
        }
    }
    else {
        #pragma omp for
        for (int i = 0; i < M; i++) {
            ....code involving a call to a function to which I'm passing as a parameter vec2.....
            if (something) { vec1[i] = vec1[i] - 1; }
        }
    }
}
This should not change much, but some implementations have a costly parallel region creation.
Case 2: You access every element with every thread
I would say you can't do that if you perform updates; otherwise you may have concurrency issues, since you have an order dependency in the loop.

OpenMP: cannot change the value of reduction variable

After I read that the initial value of a reduction variable is set according to the operator used for the reduction, I decided that instead of remembering these default values it would be better to initialize it explicitly. So I modified the code in the question by Totonga as follows:
const int num_steps = 100000;
double x, sum, dx = 1./num_steps;
#pragma omp parallel private(x) reduction(+:sum)
{
    sum = 0.;
    #pragma omp for schedule(static)
    for (int i = 0; i < num_steps; ++i)
    {
        x = (i+0.5)*dx;
        sum += 4./(1.+x*x);
    }
}
But it turns out that no matter whether I write sum = 0. or sum = 123.456, the code produces the same result (using the gcc-4.5.2 compiler). Can somebody please explain to me why? (with a reference to the OpenMP standard, if possible) Thanks in advance to everybody.
P.S. Since some people object to initializing the reduction variable, I think it makes sense to expand the question a little. The code below works as expected: I initialize the reduction variable and obtain a result which DOES depend on MY initial value:
int sum;
#pragma omp parallel reduction(+:sum)
{
    sum = 1;
}
printf("Reduction sum = %d\n", sum);
The printed result will be the number of cores, and not 0.
P.P.S. I have to update my question again. User Gilles gave an insightful comment: "And upon exit of the parallel region, these local values will be reduced using the + operator, and with the initial value of the variable, prior to entering the section."
Well, the following code gives me the result 3.142592653598146, which is a badly calculated pi, instead of the expected 103.141592653598146 (the initial code was giving me an excellent value of pi = 3.141592653598146):
const int num_steps = 100000;
double x, sum, dx = 1./num_steps;
sum = 100.;
#pragma omp parallel private(x) reduction(+:sum)
{
    #pragma omp for schedule(static)
    for (int i = 0; i < num_steps; ++i)
    {
        x = (i+0.5)*dx;
        sum += 4./(1.+x*x);
    }
}
Why would you want to do that? This is just begging with all your soul for trouble. The reduction clause and the way the local variables are initialised are defined for a reason, and the idea is that you don't need to remember these initialisation values because they are already right.
However, in your code, the behaviour is undefined. Let's see why...
Let's assume your initial code is this:
const int num_steps = 100000;
double x, sum, dx = 1./num_steps;
sum = 0.;
for (int i = 0; i < num_steps; ++i) {
    x = (i+0.5)*dx;
    sum += 4./(1.+x*x);
}
Well, the "normal" way of parallelising it with OpenMP would be:
const int num_steps = 100000;
double x, sum, dx = 1./num_steps;
sum = 0.;
#pragma omp parallel for reduction(+:sum) private(x)
for (int i = 0; i < num_steps; ++i) {
    x = (i+0.5)*dx;
    sum += 4./(1.+x*x);
}
Pretty straightforward, isn't it?
Now, when instead of that, you do:
const int num_steps = 100000;
double x, sum, dx = 1./num_steps;
#pragma omp parallel private(x) reduction(+:sum)
{
    sum = 0.;
    #pragma omp for schedule(static)
    for (int i = 0; i < num_steps; ++i)
    {
        x = (i+0.5)*dx;
        sum += 4./(1.+x*x);
    }
}
You have a problem... The reason is that upon entry into the parallel region, sum hadn't been initialised. So when you declare omp parallel reduction(+:sum), you create a per-thread private version of sum, initialised to the "logical" initial value corresponding to the operator of your reduction clause, namely 0 here because you asked for a + reduction. And upon exit of the parallel region, these local values will be reduced using the + operator, together with the value the variable had prior to entering the region. See this for reference:
The reduction clause specifies a reduction-identifier and one or more list items. For each list item, a private copy is created in each implicit task or SIMD lane, and is initialized with the initializer value of the reduction-identifier. After the end of the region, the original list item is updated with the values of the private copies using the combiner associated with the reduction-identifier.
So in summary, upon exit you have the equivalent of sum += sum_local_0 + sum_local_1 + ... + sum_local_nbthreadsMinusOne
Therefore, since in your code sum doesn't have any initial value, its value upon exit of the parallel region isn't defined either, and can be whatever...
Now let's imagine you did indeed initialise it... Then, if instead of using the right initialiser inside the parallel region (like your sum = 0.; in the code above) you used, for whatever reason, sum = 1.; instead, then the final sum won't just be incremented by 1, but by 1 times the number of threads used inside the parallel region, since the extra value is counted as many times as there are threads.
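A minimal sketch of that effect (compile with -fopenmp; the thread count of 4 is only an illustrative assumption):
#include <stdio.h>

int main(void)
{
    double sum = 100.0;                    /* original value, combined exactly once at the end */
    #pragma omp parallel num_threads(4) reduction(+:sum)
    {
        /* each private copy starts at 0 (the identity for +) and is overwritten with 1 */
        sum = 1.0;
    }
    printf("sum = %f\n", sum);             /* prints 104.0 = 100 + 4 * 1 */
    return 0;
}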
So in conclusion, just use reduction clauses and variables the "expected"/"naïve" way; that will spare you, and the people who come after you to maintain your code, a lot of trouble.
Edit: It looks like my point was not clear enough, so I'll try to explain it better:
this code:
int sum;
#pragma omp parallel reduction(+:sum)
{
    sum = 1;
}
printf("Reduction sum = %d\n", sum);
Has an undefined behaviour because it is equivalent to:
int sum, numthreads;
#pragma omp parallel
#pragma omp single
numthreads = omp_get_num_threads();
sum += numthreads; // value of sum is undefined since it never was initialised
printf("Reduction sum = %d\n",sum);
Now, this code is valid:
int sum = 0; // here, sum has been initialised
#pragma omp parallel reduction(+:sum)
{
    sum = 1;
}
printf("Reduction sum = %d\n", sum);
To convince yourself, just read the snippet of the standard I gave:
After the end of the region, the original list item is updated with the values of the private copies using the combiner associated with the reduction-identifier.
So the reduction uses the combination of the private reduction variables and the original value to perform the final reduction upon exit. So if the original value wasn't set, the final value is undefined as well. And the fact that your compiler happens to give you a value that seems right does not make the code right.
Is that clearer now?

What is the usage of reduction in openmp?

I have this piece of code that is parallelized.
int i, n;
double pi, x;
double area = 0.0;
#pragma omp parallel for private(x) reduction(+:area)
for (i = 0; i < n; i++) {
    x = (i+0.5)/n;
    area += 4.0/(1.0+x*x);
}
pi = area/n;
It is said that the reduction removes the race condition that would happen if we didn't use a reduction. Still, I'm wondering: do we need to add lastprivate for area, since it is used outside the parallel loop and would otherwise not be visible outside of it? Or does the reduction cover this as well?
Reduction takes care of making a private copy of area for each thread. Once the parallel region ends, area is reduced in one atomic operation. In other words, the area that is exposed is an aggregate of the private areas of all the threads.
thread 1 - private area = compute(x)
thread 2 - private area = compute(y)
thread 3 - private area = compute(z)
reduction step - public area = area<thread1> + area<thread2> + area<thread3> ...
You do not need lastprivate. To help you understand how reductions are done I think it's useful to see how this can be done with atomic. The following code
float sum = 0.0f;
#pragma omp parallel for reduction(+:sum)
for (int i = 0; i < N; i++) {
    sum += //
}
is equivalent to
float sum = 0.0f;
#pragma omp parallel
{
    float sum_private = 0.0f;
    #pragma omp for nowait
    for (int i = 0; i < N; i++) {
        sum_private += //
    }
    #pragma omp atomic
    sum += sum_private;
}
Although this alternative has more code, it is helpful for showing how to use more complicated operators. One limitation of the atomic approach is that it only supports a few basic operators. If you want to use a more complicated operator (such as an SSE/AVX addition), then you can replace atomic with critical (see Reduction with OpenMP with SSE/AVX).
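For illustration, a minimal sketch of the same manual reduction using critical instead of atomic; the array a and its length N are assumed to exist, and the combine step inside the critical section could be any operation, not just +=:
float sum = 0.0f;
#pragma omp parallel
{
    float sum_private = 0.0f;
    #pragma omp for nowait
    for (int i = 0; i < N; i++) {
        sum_private += a[i];      /* per-thread partial sum, no synchronisation needed */
    }
    #pragma omp critical
    sum += sum_private;           /* inside critical, any combine operation is allowed */
}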
