OpenACC: How to apply an OpenACC pragma to "macro loops"

I defined these macros:
#define I_LOOP(g, i) _ibeg = g->lbeg[IDIR]; _iend = g->lend[IDIR]; \
for (i = _ibeg; i <= _iend; i++)
#define J_LOOP(g, j) _jbeg = g->lbeg[JDIR]; _jend = g->lend[JDIR]; \
for (j = _jbeg; i <= _jend; j++)
I have this loop I want to parallelize
#pragma acc parallel loop collapse(2)
I_LOOP(g, i){
J_LOOP(g, j){
U0[j][i] = Uc[j][i];
}}
but I get error: this kind of pragma may not be used here.
Is there a way I can parallelize this loop with the macros?

First, an OpenACC loop directive requires the for loop to be tightly nested against it, that is, without the preceding _ibeg, _iend assignments in between.
Second, for this kind of #define usage, you may be able to cook something up with _Pragma (see https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html, etc.):
#define I_LOOP(g, i) _ibeg = g->lbeg[IDIR]; _iend = g->lend[IDIR]; \
_Pragma("acc parallel loop private(_jbeg, _jend") \
for (i = _ibeg; i <= _iend; i++)
#define J_LOOP(g, j) _jbeg = g->lbeg[JDIR]; _jend = g->lend[JDIR]; \
_Pragma("acc loop") \
for (j = _jbeg; j <= _jend; j++)
(Untested; you didn't provide a stand-alone example.)
(Notice I also fixed i <= _jend typo.)
Possibly indirection via #define DO_PRAGMA(x) _Pragma(#x) might be useful:
#define DO_PRAGMA(x) _Pragma(#x)
#define I_LOOP(g, i, pragma) _ibeg = g->lbeg[IDIR]; _iend = g->lend[IDIR]; \
DO_PRAGMA(pragma) \
for (i = _ibeg; i <= _iend; i++)
#define J_LOOP(g, j, pragma) _jbeg = g->lbeg[JDIR]; _jend = g->lend[JDIR]; \
DO_PRAGMA(pragma) \
for (j = _jbeg; j <= _jend; j++)
..., and then:
I_LOOP(g, i, "acc parallel loop private(_jbeg, _jend"){
J_LOOP(g, j, "acc loop"){
U0[j][i] = Uc[j][i];
}}
Some more code restructuring would be necessary for using the collapse clause, which again requires the for loops to be tightly nested.
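For example (untested, reusing the names from your snippet), the restructured version without the macros would hoist the bound assignments out so that the two for loops are tightly nested under the directive:
/* Untested sketch: bounds hoisted so the loops are tightly nested, allowing collapse. */
_ibeg = g->lbeg[IDIR]; _iend = g->lend[IDIR];
_jbeg = g->lbeg[JDIR]; _jend = g->lend[JDIR];
#pragma acc parallel loop collapse(2)
for (i = _ibeg; i <= _iend; i++) {
    for (j = _jbeg; j <= _jend; j++) {
        U0[j][i] = Uc[j][i];
    }
}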

Related

-ta=tesla:deepcopy flag and #pragma acc shape

I just found out about the deepcopy flag. Until now I've always used -ta=tesla:managed to handle deep copies, and I would like to explore the alternative.
I read this article: https://www.pgroup.com/blogs/posts/deep-copy-beta.htm, which is well written, but I think it does not cover my case. I have a structure of this type:
typedef struct Data_{
double ****Vc;
double ****Uc;
} Data;
The shape of these two arrays is not defined by an element of the struct itself, but by the elements of another structure, and those are themselves set only during the execution of the program.
How can I use the #pragma acc shape(Vc, Uc) in this case?
Without this pragma and copying the structure as follows:
int main(){
Data data;
Initialize(&data);
}
int Initialize(Data *data){
data->Uc = ARRAY_4D(ntot[KDIR], ntot[JDIR], ntot[IDIR], NVAR, double);
data->Vc = ARRAY_4D(NVAR, ntot[KDIR], ntot[JDIR], ntot[IDIR], double);
#pragma acc enter data copyin(data)
PrimToCons3D(data->Vc, data->Uc, grid, NULL);
}
void PrimToCons3D(double ****V, double ****U, Grid *grid, RBox *box){
#pragma acc parallel loop collapse(3) present(V[:NVAR][:nx3_tot][:nx2_tot][:nx1_tot])
for (k = kbeg; k <= kend; k++){
for (j = jbeg; j <= jend; j++){
for (i = ibeg; i <= iend; i++){
double v[NVAR];
#pragma acc loop
for (nv = 0; nv < NVAR; nv++) v[nv] = V[nv][k][j][i];
}
I get
FATAL ERROR: data in PRESENT clause was not found on device 1: name=V host:0x1fd2b80
file:/home/Prova/Src/mappers3D.c PrimToCons3D line:140
Btw, this same code works fine with -ta=tesla:managed.
Since you don't provide a full reproducing example, I wasn't able to test this, but it would look something like:
typedef struct Data_{
int i,j,k,l;
double ****Vc;
double ****Uc;
#pragma acc shape(Vc[0:k][0:j][0:i][0:l])
#pragma acc shape(Uc[0:k][0:j][0:i][0:l])
} Data;
int Initialize(Data *data){
data->i = ntot[IDIR];
data->j = ntot[JDIR];
data->k = ntot[KDIR];
data->l = NVAR;
data->Uc = ARRAY_4D(ntot[KDIR], ntot[JDIR], ntot[IDIR], NVAR, double);
data->Vc = ARRAY_4D(NVAR, ntot[KDIR], ntot[JDIR], ntot[IDIR], double);
#pragma acc enter data copyin(data)
PrimToCons3D(data->Vc, data->Uc, grid, NULL);
}
void PrimToCons3D(double ****V, double ****U, Grid *grid, RBox *box){
int kbeg, jbeg, ibeg, kend, jend, iend;
#pragma acc parallel loop collapse(3) present(V, U)
for (int k = kbeg; k <= kend; k++){
for (int j = jbeg; j <= jend; j++){
for (int i = ibeg; i <= iend; i++){
Though keep in mind that the "shape" and "policy" directives were not adopted by the OpenACC standard, and we (the NVHPC compiler team) only did a Beta version, which we have not maintained.
It's probably better to do a manual deep copy, which will be standard compliant; I can help with that if you can provide a reproducer that includes how you're doing the array allocation, i.e. "ARRAY_4D".
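For illustration only, a rough, untested sketch of the manual deep copy pattern for one double**** member might look like this (it assumes the array is built as nested pointer arrays; the helper name and the sizes nk, nj, ni, nv are placeholders, not taken from your code):
/* Copy the pointer arrays level by level; since each parent is already
   present on the device, the runtime attaches the device pointers. */
void deep_copyin_4d(double ****A, int nk, int nj, int ni, int nv)
{
    #pragma acc enter data copyin(A[0:nk])
    for (int k = 0; k < nk; k++) {
        #pragma acc enter data copyin(A[k][0:nj])
        for (int j = 0; j < nj; j++) {
            #pragma acc enter data copyin(A[k][j][0:ni])
            for (int i = 0; i < ni; i++) {
                #pragma acc enter data copyin(A[k][j][i][0:nv])
            }
        }
    }
}
If ARRAY_4D instead allocates one contiguous data block with pointer arrays pointing into it, the innermost level would be copied as a single block and the pointers attached, which is why seeing the allocator matters.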

Are macros (always) compatible and portable with OpenACC?

In my code I define the lower and upper bounds of different computational
regions by using a structure,
typedef struct RBox_{
int ibeg;
int iend;
int jbeg;
int jend;
int kbeg;
int kend;
} RBox;
I have then introduced the following macro,
#define BOX_LOOP(box, k,j,i) for (k = (box)->kbeg; k <= (box)->kend; k++) \
for (j = (box)->jbeg; j <= (box)->jend; j++) \
for (i = (box)->ibeg; i <= (box)->iend; i++)
(where box is a pointer to a RBox structure) to perform loops as follows:
#pragma acc parallel loop collapse(3) present(box, data)
BOX_LOOP(&box, k,j,i){
A[k][j][i] = ...
}
My question is: is employing the macro totally equivalent to writing the loop explicitly as below?
ibeg = box->ibeg; iend = box->iend;
jbeg = box->jbeg; jend = box->jend;
kbeg = box->kbeg; kend = box->kend;
#pragma acc parallel loop collapse(3) present(box, data)
for (k = kbeg; k <= kend; k++){
for (j = jbeg; j <= jend; j++){
for (i = ibeg; i <= iend; i++){
A[k][j][i] = ...
}}}
Furthermore, are macros portable to different versions of the nvc compiler?
Preprocessor directives and user-defined macros are part of the C99 language standard, which nvc (as well as its predecessor "pgcc") has supported for quite some time (~20 years). So yes, they would be portable to all versions of nvc.
The preprocessing step occurs very early in the compilation process. Only after the macros are expanded does the compiler process the OpenACC pragmas. So yes, using the macro above would be equivalent to explicitly writing out the loops.
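For illustration, after preprocessing the compiler sees something like the following (the loop body is the placeholder from your snippet, and box is assumed to be the pointer the macro expects):
#pragma acc parallel loop collapse(3) present(box, data)
for (k = (box)->kbeg; k <= (box)->kend; k++)
    for (j = (box)->jbeg; j <= (box)->jend; j++)
        for (i = (box)->ibeg; i <= (box)->iend; i++){
            A[k][j][i] = ...
        }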
Since the macro is expanded by the preprocessor, which runs before the OpenACC directives are interpreted, I would expect this to work exactly how you hope. What are you hoping to accomplish by writing these loops as a macro rather than as a function?
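For comparison, a hypothetical function version (the name box_copy and the array arguments are illustrative, not from your code) would keep the directive next to the loops in just the same way:
/* Illustrative sketch: the same loop nest wrapped in a plain function instead of a macro. */
static void box_copy(RBox *box, double ***A, double ***B)
{
    int i, j, k;
    #pragma acc parallel loop collapse(3) present(box, A, B)
    for (k = box->kbeg; k <= box->kend; k++)
        for (j = box->jbeg; j <= box->jend; j++)
            for (i = box->ibeg; i <= box->iend; i++)
                A[k][j][i] = B[k][j][i];
}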

OpenACC bitonic sort is much slower on GPU than on CPU

I have the following bit of code to sort double values on my GPU:
void bitonic_sort(double *data, int length) {
#pragma acc data copy(data[0:length], length)
{
int i,j,k;
for (k = 2; k <= length; k *= 2) {
for (j=k >> 1; j > 0; j = j >> 1) {
#pragma acc parallel loop gang worker vector independent
for (i = 0; i < length; i++) {
int ixj = i ^ j;
if ((ixj) > i) {
if ((i & k) == 0 && data[i] > data[ixj]) {
_ValueType buffer = data[i];
data[i] = data[ixj];
data[ixj] = buffer;
}
if ((i & k) != 0 && data[i] < data[ixj]) {
_ValueType buffer = data[i];
data[i] = data[ixj];
data[ixj] = buffer;
}
}
}
}
}
}
}
This is a bit slower on my GPU than on my CPU. I'm using GCC 6.1. I can't figure out how to run the whole code on my GPU. So far, only the parallel loop is executed on the GPU, and it switches between CPU and GPU for each iteration of the outer loops.
I'd like to run the whole content of the function on the GPU, but I can't figure out how. One major problem for me now is that the GCC implementation currently doesn't allow nested parallelism, so I can't use a parallel construct inside another parallel construct. Is there any way to get around that?
I've tried putting a kernels construct on top of the first loop but that slows it down by a factor of about 10. If I use a parallel construct above the first loop instead, the result isn't sorted any more, which makes sense. The two outer loops need to be executed sequentially for the algorithm to work.
If you have any other suggestions on how I could improve performance, I would be grateful as well.

Parallelizing in OpenMP

I have the following code that I want to parallelize using OpenMP
for(m=0; m<r_c; m++)
{
for(n=0; n<c_c; n++)
{
double value = 0.0;
for(j=0; j<r_b; j++)
for(k=0; k<c_b; k++)
{
double a;
if((m-j)<0 || (n-k)<0 || (m-j)>r_a || (n-k)>c_a)
a = 0.0;
else
a = h_a[((m-j)*c_a) + (n-k)];
//printf("%lf\t", a);
value += h_b[(j*c_b) + k] * a;
}
h_c[m*c_c + n] = value;
//printf("%lf\t", h_c[m*c_c + n]);
}
//cout<<"row "<<m<<" completed"<<endl;
}
In this I want every thread to perform the "for j" and "for k" loops simultaneously.
I am trying to do this by putting #pragma omp parallel for before the "for m" loop, but I am not getting the correct result.
How can I do this in an optimized manner? Thanks in advance.
Depending on exactly which loop you want to parallelize, you have three options:
#pragma omp parallel
{
#pragma omp for // Option #1
for(m=0; m<r_c; m++)
{
for(n=0; n<c_c; n++)
{
double value = 0.0;
#pragma omp for // Option #2
for(j=0; j<r_b; j++)
for(k=0; k<c_b; k++)
{
double a;
if((m-j)<0 || (n-k)<0 || (m-j)>r_a || (n-k)>c_a)
a = 0.0;
else
a = h_a[((m-j)*c_a) + (n-k)];
//printf("%lf\t", a);
value += h_b[(j*c_b) + k] * a;
}
h_c[m*c_c + n] = value;
//printf("%lf\t", h_c[m*c_c + n]);
}
//cout<<"row "<<m<<" completed"<<endl;
}
}
//////////////////////////////////////////////////////////////////////////
// Option #3
for(m=0; m<r_c; m++)
{
for(n=0; n<c_c; n++)
{
#pragma omp parallel
{
double value = 0.0;
#pragma omp for
for(j=0; j<r_b; j++)
for(k=0; k<c_b; k++)
{
double a;
if((m-j)<0 || (n-k)<0 || (m-j)>r_a || (n-k)>c_a)
a = 0.0;
else
a = h_a[((m-j)*c_a) + (n-k)];
//printf("%lf\t", a);
value += h_b[(j*c_b) + k] * a;
}
h_c[m*c_c + n] = value;
//printf("%lf\t", h_c[m*c_c + n]);
}
}
//cout<<"row "<<m<<" completed"<<endl;
}
Test and profile each. You might find that option #1 is fastest if there isn't a lot of work for each thread, or you may find that with optimizations on, there is no difference (or even a slowdown) when enabling OMP.
Edit
I've adapted the MCVE supplied in the comments as follows:
#include <iostream>
#include <chrono>
#include <cstdlib>
#include <omp.h>
#include <algorithm>
#include <vector>
#define W_OMP
int main(int argc, char *argv[])
{
std::vector<double> h_a(9);
std::generate(h_a.begin(), h_a.end(), std::rand);
int r_b = 500;
int c_b = r_b;
std::vector<double> h_b(r_b * c_b);
std::generate(h_b.begin(), h_b.end(), std::rand);
int r_c = 500;
int c_c = r_c;
int r_a = 3, c_a = 3;
std::vector<double> h_c(r_c * c_c);
auto start = std::chrono::system_clock::now();
#ifdef W_OMP
#pragma omp parallel
{
#endif
int m,n,j,k;
#ifdef W_OMP
#pragma omp for
#endif
for(m=0; m<r_c; m++)
{
for(n=0; n<c_c; n++)
{
double value = 0.0,a;
for(j=0; j<r_b; j++)
{
for(k=0; k<c_b; k++)
{
if((m-j)<0 || (n-k)<0 || (m-j)>r_a || (n-k)>c_a)
a = 0.0;
else a = h_a[((m-j)*c_a) + (n-k)];
value += h_b[(j*c_b) + k] * a;
}
}
h_c[m*c_c + n] = value;
}
}
#ifdef W_OMP
}
#endif
auto end = std::chrono::system_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
std::cout << elapsed.count() << "ms"
#ifdef W_OMP
"\t with OMP"
#else
"\t without OMP"
#endif
"\n";
return 0;
}
As a reference, I'm using VS2012 (OMP 2.0, grrr). The collapse clause was introduced in OpenMP 3.0, so it isn't available here (a collapsed variant is sketched after the summary below). Optimizations were /O2 and compiled in Release x64.
Benchmarks
Using the original sizes of the loops (7,7,5,5) and therefore arrays, the results were 0ms without OMP and 1ms with. Verdict: optimizations were better, and the added overhead wasn't worth it. Also, the measurements are not reliable (too short).
Using slightly larger loop sizes (100, 100, 100, 100), and therefore arrays, the results were about equal, at about 108ms. Verdict: still not worth the naive effort; tweaking OMP parameters might tip the scale. Definitely not the x4 speedup I would hope for.
Using even larger loop sizes (500, 500, 500, 500), and therefore arrays, OMP started to pull ahead. Without OMP: 74.3s; with OMP: 15s. Verdict: worth it. Weird. I got a x5 speedup with four threads and four cores on an i5. I'm not going to try and figure out how that happened.
Summary
As has been stated in countless answers here on SO, it's not always a good idea to parallelize every for loop you come across. Things that can screw up your desired xN speedup:
Not enough work per thread to justify the overhead of creating the additional threads
The work itself is memory bound. This means that the CPU can be running at 1petaHz and you still won't see a speedup.
Memory access patterns. I'm not going to go there. Feel free to edit in the relevant info if you want it.
OMP parameters. The best choice of parameters will often be a result of this entire list (not including this item, to avoid recursion issues).
SIMD operations. Depending on what and how you're doing, the compiler may vectorize your operations. I have no idea if OMP will usurp the SIMD operations, but it is possible. Check your assembly (foreign language to me) to confirm.
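For reference, with a compiler supporting OpenMP 3.0 or later (unlike the VS2012/OpenMP 2.0 setup above), the two outer loops could be collapsed into one parallel iteration space. A minimal, untested sketch:
// Untested sketch: collapse the m and n loops (requires OpenMP 3.0+).
#pragma omp parallel for collapse(2)
for (int m = 0; m < r_c; m++)
{
    for (int n = 0; n < c_c; n++)
    {
        double value = 0.0;
        for (int j = 0; j < r_b; j++)
            for (int k = 0; k < c_b; k++)
            {
                double a;
                if ((m-j) < 0 || (n-k) < 0 || (m-j) > r_a || (n-k) > c_a)
                    a = 0.0;
                else
                    a = h_a[(m-j)*c_a + (n-k)];
                value += h_b[j*c_b + k] * a;
            }
        h_c[m*c_c + n] = value;
    }
}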

Parallelizing a data dependence loop with OpenMP

I have to parallelize the following code; the data dependence is i -> i-3:
for(i=3; i<N2; i++)
for(j=0; j<N3; j++)
{
D[i][j] = D[i-3][j] / 3.0 + x + E[i];
if (D[i][j] < 6.5) bat = bat + D[i][j]/100.0;
}
I tried #pragma omp parallel for reduction(+:bat) private(i,j) shared(D,x,E) and similar things, but it wasn't correct.
Let's consider two threads and why parallelizing the outer loop is failing.
Thread 1: i=3, j=0. This reads D[0][0] and writes D[3][0]
Thread 2: i=6, j=0. This reads D[3][0] and writes D[6][0]
So thread 2 reads D[3][0], the same value that thread 1 is writing. That's the race condition. I think if you parallelize the inner loop you won't have a problem.
for(i=3; i<N2; i++) {
#pragma omp parallel for reduction(+:bat) private(j)
for(j=0; j<N3; j++) {
D[i][j] = D[i-3][j] / 3.0 + x + E[i];
if (D[i][j] < 6.5) bat = bat + D[i][j]/100.0;
}
}
Edit: I forgot to add the reduction and make j private. I fixed that now.
