Cannot get speedup for simple OpenMP parallel for loop - openmp

This is my first try at OpenMP, but I cannot get any speedup from it. The machine is Linux amd64.
I wrote the following code:
printf ("nt = %d\n", nt);
omp_set_num_threads(nt);
int i, j, s;
#pragma omp parallel for private(j,s)
for (i = 0; i < 10000; i++)
{
    for (j = 0; j < 100000; j++)
    {
        s++;
    }
}
And I compile it with
g++ tempomp.cpp -o tomp -lgomp
And run it with different numbers of threads, but there is no speedup:
nt = 1
elapsed time =2.670000
nt = 2
elapsed time =2.670000
nt = 12
elapsed time =2.670000
Any ideas?

I think you need to add the flag -fopenmp to your compile command:
g++ tempomp.cpp -o tomp -lgomp -fopenmp
When -fopenmp is used, the compiler generates parallel code based on the OpenMP directives it encounters; without it the pragmas are simply ignored.
-lgomp links against libgomp, the GNU OpenMP runtime library (it is added automatically when -fopenmp is given).
How many cores does your machine have?
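Two further pitfalls worth noting (my own addition, not part of the answer above): s is declared private and never combined or printed, so an optimizing compiler is free to drop the loops entirely, and elapsed time is best measured with omp_get_wtime. A minimal self-contained sketch, using an arbitrary thread count of 4 and built with g++ -fopenmp tomp.cpp -o tomp (no explicit -lgomp needed):
#include <omp.h>
#include <stdio.h>

int main(void)
{
    int nt = 4;                      /* arbitrary thread count for this sketch */
    omp_set_num_threads(nt);

    long s = 0;
    double t0 = omp_get_wtime();     /* wall-clock timer from the OpenMP runtime */
    #pragma omp parallel for reduction(+:s)
    for (int i = 0; i < 10000; i++)
        for (int j = 0; j < 100000; j++)
            s++;
    double t1 = omp_get_wtime();

    /* Printing s keeps the work from being optimized away. */
    printf("nt = %d, s = %ld, elapsed = %f s\n", nt, s, t1 - t0);
    return 0;
}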

Related

Are there any downsides to -fPIC (or -fpic)? Can I just _always_ include it in my code? [duplicate]

Question
I am testing a simple code which calculates the Mandelbrot fractal. I have been checking its performance depending on the number of iterations in the function that checks whether a point belongs to the Mandelbrot set or not.
The surprising thing is that I am getting a big difference in times after adding the -fPIC flag. From what I read, the overhead is usually negligible, and the highest overhead I came across was about 6%. Here I measured around 30% overhead. Any advice will be appreciated!
Details of my project
I use the -O3 flag, gcc 4.7.2, Ubuntu 12.04.2, x86_64.
The results look as follows:
#iter    C (fPIC)    C        C/C(fPIC)
1        0.01        0.01     1.00
100      0.04        0.03     0.75
200      0.06        0.04     0.67
500      0.15        0.1      0.67
1000     0.28        0.19     0.68
2000     0.56        0.37     0.66
4000     1.11        0.72     0.65
8000     2.21        1.47     0.67
16000    4.42        2.88     0.65
32000    8.8         5.77     0.66
64000    17.6        11.53    0.66
Commands I use:
gcc -O3 -fPIC fractalMain.c fractal.c -o ffpic
gcc -O3 fractalMain.c fractal.c -o f
Code: fractalMain.c
#include <time.h>
#include <stdio.h>
#include <stdbool.h>
#include "fractal.h"
int main()
{
    int iterNumber[] = {1, 100, 200, 500, 1000, 2000, 4000, 8000, 16000, 32000, 64000};
    int it;
    for(it = 0; it < 11; ++it)
    {
        clock_t start = clock();
        fractal(iterNumber[it]);
        clock_t end = clock();
        double millis = (end - start)*1000 / CLOCKS_PER_SEC/(double)1000;
        printf("Iter: %d, time: %lf \n", iterNumber[it], millis);
    }
    return 0;
}
Code: fractal.h
#ifndef FRACTAL_H
#define FRACTAL_H
void fractal(int iter);
#endif
Code: fractal.c
#include <stdio.h>
#include <stdbool.h>
#include "fractal.h"
void multiplyComplex(double a_re, double a_im, double b_re, double b_im, double* res_re, double* res_im)
{
    *res_re = a_re*b_re - a_im*b_im;
    *res_im = a_re*b_im + a_im*b_re;
}
void sqComplex(double a_re, double a_im, double* res_re, double* res_im)
{
    multiplyComplex(a_re, a_im, a_re, a_im, res_re, res_im);
}
bool isInSet(double P_re, double P_im, double C_re, double C_im, int iter)
{
    double zPrev_re = P_re;
    double zPrev_im = P_im;
    double zNext_re = 0;
    double zNext_im = 0;
    double* p_zNext_re = &zNext_re;
    double* p_zNext_im = &zNext_im;
    int i;
    for(i = 1; i <= iter; ++i)
    {
        sqComplex(zPrev_re, zPrev_im, p_zNext_re, p_zNext_im);
        zNext_re = zNext_re + C_re;
        zNext_im = zNext_im + C_im;
        if(zNext_re*zNext_re+zNext_im*zNext_im > 4)
        {
            return false;
        }
        zPrev_re = zNext_re;
        zPrev_im = zNext_im;
    }
    return true;
}
bool isMandelbrot(double P_re, double P_im, int iter)
{
    return isInSet(0, 0, P_re, P_im, iter);
}
void fractal(int iter)
{
    int noIterations = iter;
    double xMin = -1.8;
    double xMax = 1.6;
    double yMin = -1.3;
    double yMax = 0.8;
    int xDim = 512;
    int yDim = 384;
    double P_re, P_im;
    int nop;
    int x, y;
    for(x = 0; x < xDim; ++x)
        for(y = 0; y < yDim; ++y)
        {
            P_re = (double)x*(xMax-xMin)/(double)xDim+xMin;
            P_im = (double)y*(yMax-yMin)/(double)yDim+yMin;
            if(isMandelbrot(P_re, P_im, noIterations))
                nop = x+y;
        }
    printf("%d", nop);
}
Story behind the comparison
It might look a bit artificial to add the -fPIC flag when building an executable (as per one of the comments), so a few words of explanation. First I only compiled the program as an executable and wanted to compare it to my Lua code, which calls the isMandelbrot function from C. So I created a shared object to call it from Lua, and saw big time differences, but couldn't understand why they were growing with the number of iterations. In the end I found out that it was because of -fPIC. When I create a little C program which calls my Lua script (so effectively I do the same thing, only I don't need the .so), the times are very similar to C (without -fPIC). So I have checked it in a few configurations over the last few days and it consistently shows two sets of very similar results: faster without -fPIC and slower with it.
It turns out that when you compile without the -fPIC option, multiplyComplex, sqComplex, isInSet and isMandelbrot are inlined automatically by the compiler. If you define those functions as static, you will likely get the same performance when compiling with -fPIC, because the compiler will be free to perform inlining.
The reason why the compiler is unable to automatically inline the helper functions has to do with symbol interposition. Position independent code is required to access all global data indirectly, i.e. through the global offset table. The very same constraint applies to function calls, which have to go through the procedure linkage table. Since a symbol might get interposed by another one at runtime (see LD_PRELOAD), the compiler cannot simply assume that it is safe to inline a function with global visibility.
The very same assumption can be made if you compile without -fPIC, i.e. the compiler can safely assume that a global symbol defined in the executable cannot be interposed because the lookup scope begins with the executable itself which is then followed by all other libraries, including the preloaded ones.
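To illustrate the static suggestion above, here is a sketch (my own, untested) of how the helpers in fractal.c could be given internal linkage, so that interposition is impossible and inlining stays available even under -fPIC:
/* fractal.c (sketch): static gives the helpers internal linkage, so the
   compiler may inline them freely even when building with -fPIC. */
static void multiplyComplex(double a_re, double a_im, double b_re, double b_im,
                            double* res_re, double* res_im)
{
    *res_re = a_re*b_re - a_im*b_im;
    *res_im = a_re*b_im + a_im*b_re;
}

static void sqComplex(double a_re, double a_im, double* res_re, double* res_im)
{
    multiplyComplex(a_re, a_im, a_re, a_im, res_re, res_im);
}

/* isInSet and isMandelbrot can be made static in the same way if only
   fractal() needs to remain visible outside this translation unit. */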
For a more thorough understanding have a look at the following paper.
As other people already pointed out, -fPIC forces GCC to disable many optimizations, e.g. inlining and cloning. I'd like to point out several ways to overcome this:
- replace -fPIC with -fPIE if you are compiling the main executable (not libraries), as this allows the compiler to assume that interposition is not possible;
- use -fvisibility=hidden and __attribute__((visibility("default"))) to export only the necessary functions from the library and hide the rest; this allows GCC to optimize hidden functions more aggressively (see the sketch after this list);
- use private symbol aliases (__attribute__((alias ("__f")));) to refer to library functions from within the library; this would again untie GCC's hands;
- the previous suggestion can be automated with the -fno-semantic-interposition flag added in recent GCC versions.
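A minimal sketch of the visibility approach (my own example; it assumes only fractal needs to be exported from the shared library, built with something like gcc -O3 -fPIC -fvisibility=hidden):
#include <stdbool.h>

/* Exported entry point: explicitly marked with default visibility. */
__attribute__((visibility("default"))) void fractal(int iter);

/* With -fvisibility=hidden these helpers default to hidden visibility,
   so GCC may inline and optimize them freely inside the shared object. */
bool isInSet(double P_re, double P_im, double C_re, double C_im, int iter);
bool isMandelbrot(double P_re, double P_im, int iter);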
It's interesting to note that Clang is different from GCC as it allows all optimizations by default regardless of -fPIC (can be overridden with -fsemantic-interposition to obtain GCC-like behavior).
As others have discussed in the comment section of your opening post, compiling with -flto should help to reduce the difference in run-times you are seeing for this particular case, since the link-time optimisations of gcc will likely figure out that it's actually ok to inline a couple of functions ;)
In general, link-time optimisations can lead to noticeable reductions in code size (~6%, see the paper on link-time optimisation in gold), and thus run time as well (more of your program fits in the cache). Also note that -fPIC is mostly viewed as a feature that enables tighter security and is always enabled in Android. This question on SO briefly discusses it as well. Also, just to let you know, -fpic is the faster version of -fPIC, so if you must use -fPIC try -fpic instead (see the gcc docs). For x86 it might not make a difference, but you need to check this for yourself or ask on gcc-help.
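For example (my own untested suggestion, flags as documented in the GCC manual), the original comparison could be repeated with LTO enabled:
gcc -O3 -flto -fPIC fractalMain.c fractal.c -o ffpic_lto
gcc -O3 -flto fractalMain.c fractal.c -o f_lto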

OpenMP takes more time in my program with gcc 5.4 [duplicate]

This question already has answers here:
C++: Timing in Linux (using clock()) is out of sync (due to OpenMP?)
(3 answers)
Closed 4 years ago.
I'm using Ubuntu 16.04 as a Windows subsystem (WSL), with bash and gcc 5.4.
My Windows version is Windows 10 Home, and I have 16 GB of RAM.
My CPU is an i7-7700HQ.
I am studying computer programming at my university. These days I am interested in parallel programming, so I have written a lot of test code, but there are some problems.
int i;
int sum = 0;
clock_t S, E;
S = clock();
#pragma omp parallel for reduction(+:sum)
for(i = 0 ; i < M ; i++){
    sum++;
}
E = clock();
printf("\n%d :: time : %lf\n", sum, (double)(E-S)/1000);
return 0;
If I compile this code with the command "gcc -openmp test.c -o test" and run "time ./test", it shows
100000000 :: time : 203.125000
real 0m0.311s
user 0m0.203s
sys 0m0.016s.
However,
int i;
int sum = 0;
clock_t S, E;
S = clock();
for(i = 0 ; i < M ; i++){
    sum++;
}
E = clock();
printf("\n%d :: time : %lf\n", sum, (double)(E-S)/1000);
return 0;
and if I compile this code with the command "gcc -openmp test2.c -o test2" and run "time ./test2", it shows
100000000 :: time : 171.875000
real 0m0.295s
user 0m0.172s
sys 0m0.016s.
If I compile those codes again and again, they sometimes take the same time, but the OpenMP version is never faster.
I edited those codes with vim.
I also tried to compile with the command "gcc -fopenmp test.c" and "time ./penmp". It takes much more time than with the command "gcc -openmp test.c".
And if I compile those same codes with Visual Studio 2017 Community, OpenMP is much faster.
Please let me know how I can reduce time with openmp.
You should use omp_get_wtime() instead to measure the wall-clock time:
double dif;
double start = omp_get_wtime(); // start the timer
// beginning of computation
// ...
// end of computation
double end = omp_get_wtime();   // end the timer
dif = end - start;              // stores the difference in dif
printf("the time of dif is %f", dif);

c - Linking a PGI OpenACC-enabled library with gcc

Briefly speaking, my question is about compiling and building files (using libraries) with two different compilers while exploiting OpenACC constructs in the source files.
I have a C source file that contains an OpenACC construct. It has only a simple function that computes the total sum of an array:
#include <stdio.h>
#include <stdlib.h>
#include <openacc.h>
double calculate_sum(int n, double *a) {
    double sum = 0;
    int i;
    printf("Num devices: %d\n", acc_get_num_devices(acc_device_nvidia));
    #pragma acc parallel copyin(a[0:n])
    #pragma acc loop
    for(i=0;i<n;i++) {
        sum += a[i];
    }
    return sum;
}
I can easily compile it with the following line:
pgcc -acc -ta=nvidia -c libmyacc.c
Then I create a static library with the following line:
ar -cvq libmyacc.a libmyacc.o
To use my library, I wrote a piece of code as follows:
#include <stdio.h>
#include <stdlib.h>
#define N 1000
extern double calculate_sum(int n, double *a);
int main() {
    printf("Hello --- Start of the main.\n");
    double *a = (double*) malloc(sizeof(double) * N);
    int i;
    for(i=0;i<N;i++) {
        a[i] = (i+1) * 1.0;
    }
    double sum = 0.0;
    for(i=0;i<N;i++) {
        sum += a[i];
    }
    printf("Sum: %.3f\n", sum);
    double sum2 = -1;
    sum2 = calculate_sum(N, a);
    printf("Sum2: %.3f\n", sum2);
    return 0;
}
Now, I can use this static library with the PGI compiler itself to compile the above source (f1.c):
pgcc -acc -ta=nvidia f1.c libmyacc.a
And it executes flawlessly. However, things are different with gcc, and that is where my question lies: how can I build it properly with gcc?
Thanks to Jeff's comment on this question (linking pgi compiled library with gcc linker), I can now build my source file (f1.c) without errors, but the executable emits a fatal error.
This is what I use to compile my source file with gcc (f1.c):
gcc f1.c -L/opt/pgi/linux86-64/16.5/lib -L/usr/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -L. -laccapi -laccg -laccn -laccg2 -ldl -lcudadevice -lpgmp -lnuma -lpthread -lnspgc -lpgc -lm -lgcc -lc -lgcc -lmyacc
This is the error:
Num devices: 2
Accelerator Fatal Error: No CUDA device code available
Thanks to the -v option when compiling f1.c with the PGI compiler, I can see that the compiler invokes many other tools from PGI and NVIDIA (like pgacclnk and nvlink).
My questions:
Am I on the wrong path? Can I call functions in PGI-compiled libraries from GCC and use OpenACC within those functions?
If the answer to the above is yes, can I still link without the extra steps (calling pgacclnk and nvlink) that PGI takes?
If the answer to that is also yes, what should I do?
Add "-ta=tesla:nordc" to your pgcc compilation. By default PGI uses runtime dynamic compilation (RDC) for the GPU code. However RDC requires an extra link step (with nvlink) that gcc does not support. The "nordc" sub-option disables RDC so you'll be able to use OpenACC code in a library. However by disabling RDC you can no longer call external device routines from a compute region.
% pgcc -acc -ta=tesla:nordc -c libmyacc.c
% ar -cvq libmyacc.a libmyacc.o
a - libmyacc.o
% gcc f1.c -L/proj/pgi/linux86-64/16.5/lib -L/usr/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5 -L. -laccapi -laccg -laccn -laccg2 -ldl -lcudadevice -lpgmp -lnuma -lpthread -lnspgc -lpgc -lm -lgcc -lc -lgcc -lmyacc
% a.out
Hello --- Start of the main.
Sum: 500500.000
Num devices: 8
Sum2: 500500.000
Hope this helps,
Mat

Knowing what SIMD instructions OpenMP 4.0 will produce?

Short of checking the actual assembly produced, is there any way to determine what platform-specific instructions will be utilised by OpenMP, for a given use case?
For example, I've identified pcmpeqq i.e. 64-bit integer word equality (SSE 4.1) as the desirable instruction rather than pcmpeqd i.e. 32-bit word equality (SSE 2). Is there any way to know that OpenMP 4.0 will produce the former and not the latter? (spec does not address such specifics.)
The only way to ever guarantee that any compiler will ever emit a particular assembly instruction is to hardcode it. There's no spec in the world that constrains the compiler to generate specific instructions for a given language feature.
Having said that, if support for SSE4.1 or better is specified implicitly or explicitly on the command line, it would greatly surprise me if many compilers emitted SSE2 instructions in situations where the later instructions would work.
Checking the assembly isn't difficult:
$ cat foo.c
#include <stdio.h>
int main(int argc, char **argv) {
    const int n=128;
    long x[n];
    long y[n];
    for (int i=0; i<n/2; i++) {
        x[i] = y[i] = 1;
        x[i+n/2] = 2;
        y[i+n/2] = 2;
    }
    #pragma omp simd
    for (int i=0; i<n; i++)
        x[i] = (x[i] == y[i]);
    for (int i=0; i<n; i++)
        printf("%d: %ld\n", i, x[i]);
    return 0;
}
$ icc -openmp -msse4.1 -o foo41.s foo.c -S -std=c99 -qopt-report-phase=vec -qopt-report=2
icc: remark #10397: optimization reports are generated in *.optrpt files in the output location
$ icc -openmp -msse2 -o foo2.s foo.c -S -std=c99 -qopt-report-phase=vec -qopt-report=2 -o foo2.s
icc: remark #10397: optimization reports are generated in *.optrpt files in the output location
And sure enough:
$ grep pcmp foo41.s
pcmpeqq (%rax,%rsi,8), %xmm0 #18.25
$ grep pcmp foo2.s
pcmpeqd (%rax,%rsi,8), %xmm2 #18.25

Compiling using GSL and OpenMP

I am not the best when it comes to compiling/writing makefiles.
I am trying to write a program that uses both GSL and OpenMP.
I have no problem using GSL and OpenMP separately, but I'm having issues using both. For instance, I can compile the GSL program
http://www.gnu.org/software/gsl/manual/html_node/An-Example-Program.html
By typing
$gcc -c Bessel.c
$gcc Bessel.o -lgsl -lgslcblas -lm
$./a.out
and it works.
I was also able to compile the program that uses OpenMP that I found here:
Starting a thread for each inner loop in OpenMP
In this case I typed
$gcc -fopenmp test_omp.c
$./a.out
And I got what I wanted (all 4 threads I have were used).
However, when I simply write a program that combines the two codes
#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>
#include <omp.h>
int
main (void)
{
    double x = 5.0;
    double y = gsl_sf_bessel_J0 (x);
    printf ("J0(%g) = %.18e\n", x, y);

    int dimension = 4;
    int i = 0;
    int j = 0;

    #pragma omp parallel private(i, j)
    for (i = 0; i < dimension; i++)
        for (j = 0; j < dimension; j++)
            printf("i=%d, jjj=%d, thread = %d\n", i, j, omp_get_thread_num());

    return 0;
}
Then I try to compile it by typing
$gcc -c Bessel_omp_test.c
$gcc Bessel_omp_test.o -fopenmp -lgsl -lgslcblas -lm
$./a.out
The GSL part works (the Bessel function is computed), but only one thread is used for the OpenMP part. I'm not sure what's wrong here...
You missed the "for" worksharing directive in your OpenMP part. It should be:
// Just in case GSL modifies the number of threads
omp_set_num_threads(omp_get_max_threads());
omp_set_dynamic(0);

#pragma omp parallel for private(i, j)
for (i = 0; i < dimension; i++)
    for (j = 0; j < dimension; j++)
        printf("i=%d, jjj=%d, thread = %d\n", i, j, omp_get_thread_num());
Edit: To summarise the discussion in the comments below, the OP failed to supply -fopenmp during the compilation phase. That prevented GCC from recognising the OpenMP directives, and thus no parallel code was generated.
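In other words, the flag has to be present when the object file is compiled, not only when it is linked. Something along these lines (file names as in the question) should enable both GSL and OpenMP:
$gcc -fopenmp -c Bessel_omp_test.c
$gcc -fopenmp Bessel_omp_test.o -lgsl -lgslcblas -lm
$./a.out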
IMHO, it's incorrect to declare the variables i and j as shared. Try declaring them private. Otherwise, each thread would get the same j and j++ would generate a race condition among threads.
