OpenMP, Simple for loop comparing values in array + counter

So, I have this piece of code and I don't really know why it doesn't work properly.
#pragma omp parallel for
for (i = 0; i < N; i++)
{
    if (visible[i] > visible[i-1]) counter++;
}
I have an array that is sorted in ascending order and I want to count the value changes. So from 0.5 0.5 0.66 0.66 0.66 0.7 ... I should get counter = 2.
However, if I simply use this code, the counter doesn't return the right value, and it is also slower than without the pragma directive (40 ms without pragma, 45-50 ms with pragma, 32,000,000 iterations). I tried adding
#pragma omp atomic
counter++;
which returns the right value, but the time is around 90 ms. Any suggestions?
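A common fix (not from the original thread, just a sketch) is to give each thread its own private counter and let OpenMP combine the partial counts with a reduction clause; starting the loop at i = 1 also avoids reading visible[i-1] out of bounds on the first iteration:

// counter, i, N and visible as declared elsewhere in the question's code
counter = 0;
#pragma omp parallel for reduction(+:counter)
for (i = 1; i < N; i++)
{
    if (visible[i] > visible[i-1]) counter++;
}

Each thread accumulates into its own copy of counter and the per-thread totals are added together once at the end, so there is no per-increment atomic contention inside the loop.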

Related

How do we calculate the number of cache reads/misses in this code snippet?

I'm trying to get an understanding of how to calculate the cache misses in the code below, from the link on this page (an example given in the textbook). I can see where the calculations come from, but since both dimensions in that example are the same (32), I cannot work out how the calculation should go when the sizes of the two loops differ. Using different sized loops, what would the calculations be, please?
for (i = 32; i >= 0; i--) {
    for (j = 128; j >= 0; j--) {
        total_x += grid[i][j].x;
    }
}

for (i = 128; i >= 0; i--) {
    for (j = 32; j >= 0; j--) {
        total_y += grid[i][j].y;
    }
}
If we had a matrix with 128 rows and 24 columns (instead of the 32 x 32 in the example), using 32-bit integers, and with each memory block able to hold 16 bytes, how do we calculate the number of compulsory misses on the top loop?
Also, if we use a direct-mapped cache holding 256 bytes of data, how would we calculate the number of all the data cache misses when running the top loop?
Finally, if we flip it and use the bottom loop, how does the maths change (if it does) for the points above?
Apologies as this is all new to me and I just want to understand the maths behind it so I can answer the problem, rather than just be given an answer.
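This is not from the original thread, but here is a rough sketch of the usual arithmetic for the compulsory-miss part, assuming each grid element holds one 32-bit x and one 32-bit y (8 bytes per element), row-major storage and a cold cache:

/* Hypothetical numbers for the 128 x 24 case in the question. */
int elements          = 128 * 24;   /* 3072 elements touched by the loop  */
int elems_per_block   = 16 / 8;     /* a 16-byte block holds 2 elements   */
int compulsory_misses = 3072 / 2;   /* = 1536: each block is fetched once */

For the direct-mapped 256-byte cache (16 blocks), you would then look at how many distinct blocks the loop touches between two accesses that map to the same cache line; any reuse whose working set is larger than those 16 lines shows up as additional conflict/capacity misses.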

Blitz++, armadillo and OpenMP very slow

I have been trying for a very long time to learn how to parallelise and I have been reading lots of notes on OpenMP. So, I tried to use it, and the result I get is that every place where I tried to parallelise is 5 times slower than the serial case, and I am wondering why...
My code is the following:
toevaluate is a Blitz matrix with two columns and rows rows.
storecallj and storecallk are just two Blitz vectors that I use to store the calls and avoid extra function calls.
matrix is a square Armadillo matrix of size cols x cols (cols = rows); I will use it later for other things and it is more convenient to define it as an Armadillo matrix.
externfunction1 is a function defined elsewhere which computes the (x,y) values of the function f(a,b), where (a,b) is the input and (x,y) is the output.
problem is a string variable and normal is a boolean.
resultj and resultk are (x,y) vectors storing the output of those functions.
externfunction2 is another function defined elsewhere which computes a double by evaluating funcci (i=1,2), which is a Blitz polynomial. This polynomial is evaluated at a double, innerprod, to give another double, wvaluei (i=1,2).
I think these are all the generalities. The code is below.
{
    omp_set_dynamic(0);
    OMP_NUM_THREADS = 4;
    omp_set_num_threads(OMP_NUM_THREADS);
    int chunk = int(floor(cols/OMP_NUM_THREADS));

    #pragma omp parallel shared(matrix, storecallj, toevaluate, resultj, cols, rows, storecallk, atzero, innerprod, wvalue1, wvalue2, funcc1, funcc2, storediff, checking, problem, normal, chunk) private(tid, j, k)
    {
        tid = omp_get_thread_num();
        if (tid == 0)
        {
            printf("Initializing parallel process...\n");
        }

        #pragma omp for collapse(2) schedule(dynamic, chunk) nowait
        for(j=0; j<cols; ++j)
        {
            for(k=0; k<rows; ++k)
            {
                storecallj = toevaluate(j, All);
                externfunction1(problem, normal, storecallj, resultj);
                storecallk = toevaluate(k, All);
                storediff = storecallj - storecallk;
                if(k==j){
                    matrix(j,k) = -atzero*sum(resultj*resultj);
                }else{
                    innerprod = sqrt(sum(storediff*storediff));
                    checking = 1.0 - c*innerprod;
                    if(checking>0.0)
                    {
                        externfunction1(problem, normal, storecallk, resultk);
                        externfunction2(c, innerprod, funcc1, wvalue1);
                        externfunction2(c, innerprod, funcc2, wvalue2);
                        matrix(j,k) = -wvalue2*sum(storediff*resultj)*sum(storediff*resultk) - wvalue1*sum(resultj*resultk);
                    }
                }
            }
        }
    }
}
and it is very slow. Another thing that I have tried is:
for(j=0; j<cols; ++j)
{
    storecallj = toevaluate(j, All);
    externfunction1(problem, normal, storecallj, resultj);

    #pragma omp for collapse(2) schedule(dynamic, chunk) nowait
    for(k=0; k<rows; ++k)
    {
        storecallk = toevaluate(k, All);
        storediff = storecallj - storecallk;
        ...
I am using dynamic because I want to avoid problems in case the total number of points to evaluate is not a multiple of 4.
Could someone please give me a hand to understand why this is so slow?
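One thing worth checking (a guess from outside the thread, not a confirmed diagnosis): storecallj, storecallk, storediff, resultj, resultk, innerprod, checking, wvalue1 and wvalue2 are all listed as shared, so every thread writes into the same temporaries at the same time. A minimal sketch of the usual pattern, with plain doubles standing in for the Blitz/Armadillo objects and the extern functions, is to declare the per-iteration temporaries inside the loop body so each thread automatically gets its own copies:

#include <cmath>
#include <cstdio>

int main()
{
    const int cols = 1000, rows = 1000;
    static double matrix[1000][1000];   // stand-in for the Armadillo matrix

    #pragma omp parallel for schedule(static)
    for (int j = 0; j < cols; ++j)
    {
        for (int k = 0; k < rows; ++k)
        {
            // Declared here, so they are private to the thread running this iteration.
            double storecallj = 0.5 * j;   // stand-in for toevaluate(j, All)
            double storecallk = 0.5 * k;   // stand-in for toevaluate(k, All)
            double storediff  = storecallj - storecallk;
            matrix[j][k] = std::sqrt(storediff * storediff);
        }
    }

    std::printf("%f\n", matrix[10][20]);
    return 0;
}

Only matrix really needs to be shared; everything that is recomputed for every (j, k) pair is better kept thread-private (or declared locally as above), and once that is sorted out the collapse/dynamic-schedule choices become a secondary concern.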

Unexpected slowdown using omp

I'm using OMP to try to get some speedup in a small kernel. It's basically just querying a vector of unordered_sets for membership. I tried to make an optimization, but surprisingly I got a slowdown, and am really curious why.
My first pass was:
vector<unordered_set<uint16_t> > setList = getData();

#pragma omp parallel for default(shared) private(i, j) schedule(dynamic, 50)
for(i = 0; i < size; i++){
    for(j = 0; j < 500; j++){
        count = count + setList[i].count(val[j]);
    }
}
Then I thought I could maybe get a speedup by moving the setList[i] subexpression up one level of nesting and saving it in a temporary variable, as follows:
#pragma omp parallel for default(shared) private(i, j, currSet) schedule(dynamic, 50)
for(i = 0; i < size; i++){
    currSet = setList[i];
    for(j = 0; j < 500; j++){
        count = count + currSet.count(val[j]);
    }
}
I had thought this would save a load on each iteration of the "j" for loop and give a speedup, but it actually SLOWED DOWN by about 3x; the entire kernel took about three times as long to run. Thoughts on why this would occur?
Thanks!
Adding up a few integers is really not enough work to warrant starting threads for.
If you forget to add the reduction clause, you'll suffer from true sharing - all threads want to update that count variable at the same time. This makes all cores fight for the cache line containing that variable, which will considerably impact your performance.
I just noticed that you set the schedule to be dynamic. You shouldn't. This workload can be divided at compile time already. So don't specify a schedule.
As has already been stated, inter-loop dependencies - i.e. threads waiting for data from other threads, or data being accessed by multiple threads in succession - can cause a parallelised program to slow down and should be avoided as a rule of thumb. Built-in constructs like reductions can collect the individual results and combine them in an optimised fashion.
Here is a good example of a reduction being used in a case similar to yours, from the University of Utah:
int array[8] = { 1, 1, 1, 1, 1, 1, 1, 1};
int sum = 0, i;

#pragma omp parallel for reduction(+:sum)
for (i = 0; i < 8; i++) {
    sum += array[i];
}

printf("total %d\n", sum);
source: http://www.eng.utah.edu/~cs4960-01/lecture9.pdf
As an aside: variables declared inside a parallel region are automatically private, and the loop iteration counter of an OpenMP loop construct is private by default, so in both of your cases it is not actually necessary for i to be declared private.
see wikipedia: https://en.wikipedia.org/wiki/OpenMP#Data_sharing_attribute_clauses
Data sharing attribute clauses
shared: the data within a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
private: the data within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
see stack exchange answer here: OpenMP: are local variables automatically private?
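Applying the reduction advice to the kernel from the question, a minimal self-contained sketch might look like this (the data here is made up; in the question it comes from getData() and val):

#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <vector>
using namespace std;

int main()
{
    // Stand-in data in place of getData() and val[] from the question.
    unordered_set<uint16_t> base = {1, 2, 3};
    vector<unordered_set<uint16_t> > setList(1000, base);
    vector<uint16_t> val(500, 2);
    const long size = (long)setList.size();

    uint64_t count = 0;

    // reduction(+:count): each thread accumulates into a private copy and the
    // partial sums are combined once at the end, so the shared counter is not
    // hammered on every iteration.
    #pragma omp parallel for reduction(+:count) schedule(static)
    for (long i = 0; i < size; i++) {
        const unordered_set<uint16_t>& currSet = setList[i];   // reference, not a copy
        for (size_t j = 0; j < val.size(); j++) {
            count += currSet.count(val[j]);
        }
    }

    printf("%llu\n", (unsigned long long)count);
    return 0;
}

Taking currSet as a const reference also keeps the intent of the second version in the question without paying for a full copy of each unordered_set on every outer iteration.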

OpenMP program is slower than sequential one

When I try the following code
double start = omp_get_wtime();
long i;

#pragma omp parallel for
for (i = 0; i <= 1000000000; i++) {
    double x = rand();
}

double end = omp_get_wtime();
printf("%f\n", end - start);
Execution time is about 168 seconds, while the sequential version only takes 20 seconds.
I'm still a newbie in parallel programming. How could I get a parallel version that's faster than the sequential one?
The random number generator rand(3) uses global state variables (hidden in the (g)libc implementation). Accessing them from multiple threads causes cache issues and is also not thread safe. You should use the rand_r(3) call with a seed parameter private to each thread:
long i;
unsigned seed;

#pragma omp parallel private(seed)
{
    // Initialise the random number generator with different seed in each thread
    // The following constants are chosen arbitrarily... use something more sensible
    seed = 25234 + 17*omp_get_thread_num();

    #pragma omp for
    for (i = 0; i <= 1000000000; i++) {
        double x = rand_r(&seed);
    }
}
Note that this will produce a different stream of random numbers when executed in parallel than when executed serially. I would also recommend erand48(3) as a better (pseudo-)random number source.
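For reference, a per-thread erand48(3) variant might look roughly like this (the seed values are as arbitrary as the ones above):

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    double start = omp_get_wtime();

    #pragma omp parallel
    {
        // Each thread keeps its own 48-bit generator state, derived from its thread number.
        unsigned short state[3] = { 0x1234, 0x5678, (unsigned short)omp_get_thread_num() };

        #pragma omp for
        for (long i = 0; i <= 1000000000L; i++) {
            double x = erand48(state);   // uniform double in [0, 1)
        }
    }

    printf("%f\n", omp_get_wtime() - start);
    return 0;
}

As with rand_r, each thread sees its own independent stream, so the combined sequence differs from what a serial run would produce.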

OpenMP in Ubuntu: parallel program runs two times slower on a dual-core processor than single-threaded. Why?

I got the code from Wikipedia:
#include <stdio.h>
#include <omp.h>

#define N 100

int main(int argc, char *argv[])
{
    float a[N], b[N], c[N];
    int i;

    omp_set_dynamic(0);
    omp_set_num_threads(10);

    for (i = 0; i < N; i++)
    {
        a[i] = i * 1.0;
        b[i] = i * 2.0;
    }

    #pragma omp parallel shared(a, b, c) private(i)
    {
        #pragma omp for
        for (i = 0; i < N; i++)
            c[i] = a[i] + b[i];
    }

    printf ("%f\n", c[10]);
    return 0;
}
I tried to compile and run it on my Ubuntu 11.04 with gcc 4.5 (my configuration: Intel C2D T7500M 2.2 GHz, 2048 MB RAM) and this program ran about two times slower than the single-threaded version. Why?
Very simple answer: increase N, and set the number of threads equal to the number of processors you have.
For your machine, 100 is a very low number. Try some orders of magnitude higher.
Another question is: how are you measuring the computation time? Usually one takes the total program time to get comparable results.
I suppose the compiler optimized the for loop in the non-SMP case (e.g. using SSE instructions) and it can't in the OMP variant.
Use gcc -S (or objdump -S) to view the assembly for the different variants.
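For example (file names hypothetical):

gcc -O2 -S test.c -o test_serial.s
gcc -O2 -fopenmp -S test.c -o test_omp.s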
You might want to watch out with the shared variables anyway, because they need to be synchronized, which makes things very slow. If you can use 'smart' chunks (look at the schedule clause) you might reduce the contention, but again:
verify the emitted code
profile
don't underestimate the efficiency of singlethreaded code (because of cache locality and lack of context switches)
set the number of threads to the number of CPUs (let openMP decide it for you!); unless your thread-team has a master thread with dedicated tasks, in which case there might be value in allocating ONE extra thread
In all the cases where I tried to apply OMP for parallelization, roughly 70% of the cases came out slower. The cases with a definite speedup are those with
coarse-grained parallelism (your sample is on the fine-grained end of the spectrum)
no shared data
The issue you are facing is false memory sharing. Each thread should have its own private c[i].
Try this: #pragma omp parallel shared(a, b) private(i, c)
Run the code below and see the difference.
1.) OpenMP has an overhead so the runtime has to be more than the overhead to see a benefit.
2.) Don't set the number of threads yourself. In general I use the default number of threads. However, if your processor has hyper-threading you might get a bit better performance by setting the number of threads equal to the number of cores. With hyper-threading the default number of threads will be twice the number of cores. For example, on my machine I have four cores and the default number of threads is eight. By setting it to four, in some situations I get better results and in other cases I get worse results.
3.) There is some false sharing in c but as long as N is large enough (which it needs to be to overcome the overhead) the false sharing will not cause much of a problem. You can play with the chunk size but I don't think it will be helpful.
4.) Cache issues. You have at least four levels of memory (the values are for my system): L1 (32 KB), L2 (256 KB), L3 (12 MB), and main memory (>>12 MB). The benefits of parallelism diminish as you move into the higher levels. However, in the example below I set N to 100 million floats, which is 400 million bytes or about 381 MB, and it is still significantly faster using multiple threads. Try adjusting N and see what happens. For example, try setting N to your cache size/4 (one float is 4 bytes); arrays a and b also need to be in the cache, so you might need to set N to the cache size/12. However, if N is too small you fight with the OpenMP overhead (which is what the code in your question does).
#include <stdio.h>
#include <omp.h>

#define N 100000000

int main(int argc, char *argv[]) {
    float *a = new float[N];
    float *b = new float[N];
    float *c = new float[N];
    int i;

    for (i = 0; i < N; i++) {
        a[i] = i * 1.0;
        b[i] = i * 2.0;
    }

    double dtime;

    // Sequential version
    dtime = omp_get_wtime();
    for (i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }
    dtime = omp_get_wtime() - dtime;
    printf ("time %f, %f\n", dtime, c[10]);

    // Parallel version
    dtime = omp_get_wtime();
    #pragma omp parallel for private(i)
    for (i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }
    dtime = omp_get_wtime() - dtime;
    printf ("time %f, %f\n", dtime, c[10]);

    delete[] a;
    delete[] b;
    delete[] c;
    return 0;
}
