Cannot understand how libgomp implements the FOR construct - openmp

According to the libgomp manual, code of the form:
#pragma omp parallel for
for (i = lb; i <= ub; i++)
  body;

becomes

void subfunction (void *data)
{
  long _s0, _e0;
  while (GOMP_loop_static_next (&_s0, &_e0))
    {
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    }
  GOMP_loop_end_nowait ();
}

GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
subfunction (NULL);
GOMP_parallel_end ();
I wrote a tiny test program just to see how this implementation works:
int main(int argc, char** argv)
{
    int res, i;
    #pragma omp parallel for num_threads(4)
    for (i = 0; i < 400000; i++)
        res = res*argc;
    return 0;
}
Next, I ran gdb and set breakpoints on "GOMP_parallel_loop_static" and "GOMP_parallel_end". At the beginning, the library was not loaded, so they were pending. When I ran the test program inside gdb, I got the result below:
(gdb) run 2 1 6 5 4 3 8 7
Starting program: ./test 2 1 6 5 4 3 8 7
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff73c9700 (LWP 5381)]
[New Thread 0x7ffff6bc8700 (LWP 5382)]
[New Thread 0x7ffff63c7700 (LWP 5383)]
Thread 1 "test" hit Breakpoint 2, 0x00007ffff7bc0c00 in GOMP_parallel_end () from /usr/lib/x86_64-linux-gnu/libgomp.so.1
As you can see, it reached the second breakpoint, in "GOMP_parallel_end", but not the first. I would like to know how this can be, when the libgomp manual clearly shows that "GOMP_parallel_loop_static" comes first.
Thank you.

That part of GCC's documentation has not really been updated regularly, so it's probably a good idea to only read it as an approximation of what is actually happening. If you're interested in that level of detail, I suggest you look at the debug files generated by -fdump-tree-all and similar options.
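For example, with the test program from the question saved as test.c (the numeric prefix of the dump files varies between GCC versions):

gcc -fopenmp -fdump-tree-all -c test.c
ls test.c.*.ompexp

The .ompexp dump shows the OpenMP constructs after they have been expanded into runtime calls.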
With a recent version of GCC, your example generates a call to __builtin_GOMP_parallel, which maps to GOMP_parallel. That one internally calls GOMP_parallel_end at the end, so that's what you're seeing, I suppose.
void
GOMP_parallel (void (*fn) (void *), void *data, unsigned num_threads,
               unsigned int flags)
{
  num_threads = gomp_resolve_num_threads (num_threads, 0);
  gomp_team_start (fn, data, num_threads, flags,
                   gomp_new_team (num_threads));
  fn (data);
  ialias_call (GOMP_parallel_end) ();
}
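To check this from the debugger, you can set the breakpoint on GOMP_parallel instead (a sketch; whether the symbol is visible depends on how your libgomp was built):

(gdb) break GOMP_parallel
(gdb) run
(gdb) backtrace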
Of course, patches to update the documentation will be gladly accepted. :-)

Related

How to have the same routine executed sometimes by the CPU and sometimes by the GPU with OpenACC?

I'm dealing with a routine that I want executed by the CPU the first time and by the GPU every time after that. This routine contains the loop:
for (k = kb; k <= ke; k++) {
    for (j = jb; j <= je; j++) {
        for (i = ib; i <= ie; i++) {
            ...
        }
    }
}
I tried adding #pragma acc loop collapse(3) to the loop, and #pragma acc routine(routine) vector just before the calls where I want the GPU to execute the routine. -Minfo=accel doesn't report any message, and with Nsight Systems I see that the routine is always executed by the CPU, so this approach doesn't work.
Why does the compiler act on neither of the two #pragmas?
To follow on to Thomas' answer, here's an example of using the "if" clause:
% cat test.c
#include <stdlib.h>
#include <stdio.h>
void compute(int * Arr, int size, int use_gpu) {
#pragma acc parallel loop copyout(Arr[:size]) if(use_gpu)
    for (int i = 0; i < size; ++i) {
        Arr[i] = i;
    }
}
int main() {
    int *Arr;
    int size;
    int use_gpu;
    size = 1024;
    Arr = (int*) malloc(sizeof(int)*size);
    // Run on the host
    use_gpu = 0;
    compute(Arr, size, use_gpu);
    // Run on the GPU
    use_gpu = 1;
    compute(Arr, size, use_gpu);
    free(Arr);
}
% nvc -acc -Minfo=accel test.c
compute:
4, Generating copyout(Arr[:size]) [if not already present]
Generating NVIDIA GPU code
7, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */
% setenv NV_ACC_TIME 1
% a.out
Accelerator Kernel Timing data
test.c
compute NVIDIA devicenum=0
time(us): 48
4: compute region reached 1 time
4: kernel launched 1 time
grid: [8] block: [128]
device time(us): total=5 max=5 min=5 avg=5
elapsed time(us): total=331 max=331 min=331 avg=331
4: data region reached 2 times
9: data copyout transfers: 1
device time(us): total=43 max=43 min=43 avg=43
I'm using nvc and set the compiler's runtime profiler (NV_ACC_TIME=1) to show that the kernel is launched only once.
You need to enable OpenACC processing: -acc (with the NVHPC tools) or -fopenacc (with GCC), for example. You then need to use an OpenACC compute construct (parallel, kernels) to actually launch parallel GPU execution, plus host/device memory management as necessary. For example, you could call your routine from that compute construct, and the routine would annotate the loop nest with OpenACC loop directives, as you've mentioned, to actually make use of the GPU parallelism.
Then, to answer your actual question: the OpenACC compute constructs support an if clause that specifies whether the region executes on the current device ("GPU") or the local thread executes it ("CPU").
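Putting the two pieces together, here is a minimal sketch (all names here are made up for illustration, and it assumes NVHPC-style compilation with -acc): the routine carries an acc routine directive so device code is generated for it, and the caller decides at run time where it executes via the if clause.

#include <stdio.h>

/* Device code is generated for this routine; its loop is mapped to
   vector-level parallelism within the calling gang. */
#pragma acc routine vector
void scale(float *a, int n, float s)
{
    #pragma acc loop vector
    for (int i = 0; i < n; i++)
        a[i] *= s;
}

int main(void)
{
    float a[1024];
    for (int i = 0; i < 1024; i++)
        a[i] = (float)i;

    int use_gpu = 1;   /* 0 would run the region on the host thread */

    /* num_gangs(1) because the bare call is executed redundantly by
       every gang, and we only want the data scaled once. */
    #pragma acc parallel num_gangs(1) copy(a) if(use_gpu)
    scale(a, 1024, 2.0f);

    printf("a[10] = %f\n", a[10]);
    return 0;
}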

OpenMP pragma translation to runtime calls

I wrote a short program in C with an OpenMP pragma, and I need to know which libGOMP function GCC translates the pragma into.
Here is my marvelous code:
#include <stdio.h>
#include "omp.h"

int main(int argc, char** argv)
{
    int k = 0;
    #pragma omp parallel private(k) num_threads(4)
    {
        k = omp_get_thread_num();
        printf("Hello World from %d !\n", k);
    }
    return 0;
}
To see the intermediate representation generated by GCC v8.2.0, I compiled this program with the following command:
gcc -fopenmp -o hello.exe hello.c -fdump-tree-ompexp
And the result is given by:
;; Function main (main, funcdef_no=0, decl_uid=2694, cgraph_uid=0, symbol_order=0)
OMP region tree
bb 2: gimple_omp_parallel
bb 3: GIMPLE_OMP_RETURN
Added new low gimple function main._omp_fn.0 to callgraph
Introduced new external node (omp_get_thread_num/2).
Introduced new external node (printf/3).
;; Function main._omp_fn.0 (main._omp_fn.0, funcdef_no=1, decl_uid=2700, cgraph_uid=1, symbol_order=1)
main._omp_fn.0 (void * .omp_data_i)
{
  int k;

  <bb 6> :
  <bb 3> :
  k = omp_get_thread_num ();
  printf ("Hello World from %d !\n", k);
  return;
}
;; Function main (main, funcdef_no=0, decl_uid=2694, cgraph_uid=0, symbol_order=0)
Merging blocks 2 and 7
Merging blocks 2 and 4
main (int argc, char * * argv)
{
  int k;
  int D.2698;

  <bb 2> :
  k = 0;
  __builtin_GOMP_parallel (main._omp_fn.0, 0B, 4, 0);
  D.2698 = 0;

  <bb 3> :
  <L0>:
  return D.2698;
}
The call to "__builtin_GOMP_parallel" is what interests me, so I looked at the libGOMP source code in GCC.
However, the only relevant functions I found there were (in the parallel.c file):
GOMP_parallel_start (void (*fn) (void *), void *data, unsigned num_threads)
GOMP_parallel_end (void)
So I can imagine that, in some manner, the call to "__builtin_GOMP_parallel" is transformed into GOMP_parallel_start and GOMP_parallel_end.
How can I be sure of this assumption? How can I find the translation from the builtin function to the two others I found in the source code?
Thank you
You almost got it. __builtin_GOMP_parallel is just a compiler alias for GOMP_parallel (defined in omp-builtins.def), which is resolved very late in compilation; you can see the actual call in the assembly with gcc -S.
GOMP_parallel is similar to
GOMP_parallel_start(...);
fn(...);
GOMP_parallel_end();
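You can confirm this yourself by compiling to assembly and searching for the call (using the hello.c from the question):

gcc -fopenmp -S hello.c
grep GOMP_parallel hello.s

On x86-64 Linux this typically shows a line like "call GOMP_parallel@PLT" inside main.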

Incorrect simultaneous operation of OpenMP and MPIR in VS 2015

I'm trying to speed up a loop using OpenMP.
If I parallelize a loop that uses a plain integer variable, everything works correctly:
void main()
{
    //mpz_class i("0");
    //mpz_class k("1");
    //mpz_class l("1211728594799");
    int k = 9;
    int i = 0;
    int l = 1998899087;

    #pragma omp parallel for
    for (i = k; i <= l; i++) {
        if (i == 1998899085)
            printf("kkk");
    }
    system("pause");
}
If I use an MPIR variable in the loop instead, I get errors when building the program in Visual Studio 2015, namely C3015, C3017, and C3019. Here is the code that causes these errors:
void main()
{
    mpz_class i("0");
    mpz_class k("1");
    mpz_class l("1211728594799");
    //int k = 9;
    //int i = 0;
    //int l = 1998899087;

    #pragma omp parallel for
    for (i = k; i <= l; i++) {
        if (i == 1998899085)
            printf("kkk");
    }
    system("pause");
}
MPIR itself works correctly: if I remove the #pragma omp parallel for, the code builds fine, but it runs much slower than with an int variable over the same range of numbers.
What should I do to make OpenMP work correctly with MPIR, so that I can speed up my program by running it in parallel?
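For what it's worth: MSVC's OpenMP 2.0 support requires the loop variable of a parallel for to have a signed integral type, which is why an mpz_class index triggers C3015/C3017/C3019. One workaround (a sketch, not from the original thread) is to iterate over a plain integer index and construct the big-number value inside the loop:

#include <stdio.h>
#include <stdlib.h>
#include <mpirxx.h>   // MPIR's C++ interface (mpz_class); the header name may differ per install

int main()
{
    // Bounds kept in a native signed integer, as MSVC's OpenMP 2.0
    // requires for the loop variable. For ranges that do not fit the
    // integer type, the work would have to be split into chunks.
    long k = 9;
    long l = 1998899087;

    #pragma omp parallel for
    for (long idx = k; idx <= l; idx++) {
        mpz_class i = idx;   // each iteration gets its own private big integer
        if (i == 1998899085)
            printf("kkk");
    }

    system("pause");
    return 0;
}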

OpenCL in Xcode/OSX - Can't assign zero in kernel loop

I'm developing an accelerated component in OpenCL, using Xcode 4.5.1 and Grand Central Dispatch, guided by this tutorial.
The full kernel kept failing on the GPU, giving signal SIGABRT. I couldn't make much progress interpreting the error beyond that.
But I broke out aspects of the kernel to test, and I found something very peculiar involving assigning certain values to positions in an array within a loop.
Test scenario: give each thread a fixed range of array indices to initialize.
kernel void zero(size_t num_buckets, size_t positions_per_bucket, global int* array) {
    size_t bucket_index = get_global_id(0);
    if (bucket_index >= num_buckets) return;

    for (size_t i = 0; i < positions_per_bucket; i++)
        array[bucket_index * positions_per_bucket + i] = 0;
}
The above kernel fails. However, when I assign 1 instead of 0, the kernel succeeds (and my host code prints out the array of 1's). Based on a handful of tests on various integer values, I've only had problems with 0 and -1.
I've tried to outsmart the compiler with 1-1, (int) 0, etc., with no success. Passing zero in as a kernel argument worked, though.
The assignment to zero does work outside of the context of a for loop:
array[bucket_index * positions_per_bucket] = 0;
The findings above were confirmed on two machines with different configurations. (OSX 10.7 + GeForce, OSX 10.8 + Radeon.) Furthermore, the kernel had no trouble when running on CL_DEVICE_TYPE_CPU -- it's just on the GPU.
Clearly, something ridiculous is happening, and it must be on my end, because "zero" can't be broken. Hopefully it's something simple. Thank you for your help.
Host code:
#include <stdio.h>
#include <stdlib.h>   // for malloc
#include <OpenCL/OpenCL.h>
#include "zero.cl.h"

int main(int argc, const char* argv[]) {
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    size_t num_buckets = 64;
    size_t positions_per_bucket = 4;
    cl_int* h_array = malloc(sizeof(cl_int) * num_buckets * positions_per_bucket);
    cl_int* d_array = gcl_malloc(sizeof(cl_int) * num_buckets * positions_per_bucket, NULL, CL_MEM_WRITE_ONLY);

    dispatch_sync(queue, ^{
        cl_ndrange range = { 1, { 0 }, { num_buckets }, { 0 } };
        zero_kernel(&range, num_buckets, positions_per_bucket, d_array);
        gcl_memcpy(h_array, d_array, sizeof(cl_int) * num_buckets * positions_per_bucket);
    });

    for (size_t i = 0; i < num_buckets * positions_per_bucket; i++)
        printf("%d ", h_array[i]);
    printf("\n");

    return 0;
}
Refer to the OpenCL standard, section 6, paragraph 8 "Restrictions", bullet point k (emphasis mine):
6.8 k. Arguments to kernel functions in a program cannot be declared with the built-in scalar types bool, half, size_t, ptrdiff_t, intptr_t, and uintptr_t. [...]
The fact that your compiler even let you build the kernel at all indicates it is somewhat broken.
So you might want to fix that... but if that doesn't fix it, then it looks like a compiler bug, plain and simple (in CLC, that is, the OpenCL compiler, not your host code). There is no legitimate reason a kernel like this should fail with 0 and -1 yet work with other constants. Did you try updating your OpenCL driver? What about trying a different operating system (though I suppose this code is OS X only)?
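For reference, a sketch of the kernel with the size_t parameters replaced by a type that is legal for kernel arguments (uint here, paired with cl_uint on the host side):

kernel void zero(uint num_buckets, uint positions_per_bucket, global int* array) {
    uint bucket_index = get_global_id(0);   // get_global_id returns size_t; narrowed here
    if (bucket_index >= num_buckets) return;

    for (uint i = 0; i < positions_per_bucket; i++)
        array[bucket_index * positions_per_bucket + i] = 0;
}

The host code then has to pass cl_uint values for the first two arguments instead of size_t.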

openMP is not creating threads in visual studio

My OpenMP version did not give any speed boost. I have a dual-core machine and the CPU usage is always 50%. So I tried the sample program given on Wikipedia, and it looks like the OpenMP compiler (Visual Studio 2008) is not creating more than one thread.
This is the program:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[]) {
    int th_id, nthreads;
    #pragma omp parallel private(th_id)
    {
        th_id = omp_get_thread_num();
        printf("Hello World from thread %d\n", th_id);
        #pragma omp barrier
        if ( th_id == 0 ) {
            nthreads = omp_get_num_threads();
            printf("There are %d threads\n", nthreads);
        }
    }
    return EXIT_SUCCESS;
}
This is the output that I get:
Hello World from thread 0
There are 1 threads
Press any key to continue . . .
There's nothing wrong with the program, so presumably there's some issue with how it's being compiled or run. Is this VS2008 Pro? A quick google around suggests OpenMP is not enabled in the Standard edition. Is OpenMP enabled in Properties -> C/C++ -> Language -> OpenMP (i.e., are you compiling with /openmp)? Is the environment variable OMP_NUM_THREADS being set to 1 somewhere when you run this?
If you want to test out your program with more than one thread, there are several constructs for specifying the number of threads in an OpenMP parallel region. They are, in order of precedence:
Evaluation of the if clause
Setting of the num_threads clause
Use of the omp_set_num_threads() library function
Setting of the OMP_NUM_THREADS environment variable
Implementation default
It sounds like your implementation is defaulting to one thread (assuming you don't have OMP_NUM_THREADS=1 set in your environment).
To test with 4 threads, for instance, you could add num_threads(4) to your #pragma omp parallel directive, as in the sketch after the next paragraph.
As the other answer noted, you won't really see any "speedup" because you aren't exploiting any parallelism. But it is reasonable to want to run a "hello world" program with several threads to test it out.
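For example, the original program with only the directive line changed:

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[]) {
    int th_id, nthreads;
    /* Ask for four threads explicitly; the num_threads clause takes
       precedence over OMP_NUM_THREADS and the implementation default. */
    #pragma omp parallel private(th_id) num_threads(4)
    {
        th_id = omp_get_thread_num();
        printf("Hello World from thread %d\n", th_id);
        #pragma omp barrier
        if ( th_id == 0 ) {
            nthreads = omp_get_num_threads();
            printf("There are %d threads\n", nthreads);
        }
    }
    return EXIT_SUCCESS;
}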
As mentioned at http://docs.oracle.com/cd/E19422-01/819-3694/5_compiling.html, I got it working by setting the environment variable OMP_DYNAMIC to FALSE.
Why would you need more than one thread for that program? The OpenMP runtime may simply have decided that it doesn't need an extra thread to run a program with no loops and no code that could usefully run in parallel.
Try running some parallel stuff with OpenMP. Something like this:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#define CHUNKSIZE 10
#define N 100
int main (int argc, char *argv[])
{
int nthreads, tid, i, chunk;
float a[N], b[N], c[N];
/* Some initializations */
for (i=0; i < N; i++)
a[i] = b[i] = i * 1.0;
chunk = CHUNKSIZE;
#pragma omp parallel shared(a,b,c,nthreads,chunk) private(i,tid)
{
tid = omp_get_thread_num();
if (tid == 0)
{
nthreads = omp_get_num_threads();
printf("Number of threads = %d\n", nthreads);
}
printf("Thread %d starting...\n",tid);
#pragma omp for schedule(dynamic,chunk)
for (i=0; i<N; i++)
{
c[i] = a[i] + b[i];
printf("Thread %d: c[%d]= %f\n",tid,i,c[i]);
}
} /* end of parallel section */
}
If you want some hard core stuff, try running one of these.
