Parallel for loop for addition of local matrices in OpenMP - matrix

I have n local copies of a matrix, say 'local', one in each of n threads. I want to update a global shared matrix 's' so that each of its elements is the sum of the corresponding elements of all the local matrices.
For example, s[0][0] = local_1[0][0] + local_2[0][0] + ... + local_n[0][0].
I wrote the following loop to achieve it -
#pragma omp parallel for
for(int i = 0; i < rows; i++)
{
    for(int j = 0; j < cols; j++)
        s[i][j] = s[i][j] + local[i][j];
}
This doesn't seem to work. Could someone kindly point out where I am going wrong?
Updated with example -
Suppose there are 3 threads, with following local matrices -
thread 1
local = 1 2
3 4
thread 2
local = 5 6
7 8
thread 3
local = 1 0
0 1
shared matrix would then be
s = 7 8
10 13

Throughout this answer I'm assuming that you have correctly created a private version of local on each thread, as your question and example (but not your code snippet) indicate.
As you've written the code, the variable i is private, that is, each thread has its own copy. Since it is the iteration variable of the outermost loop, each thread will get its own set of values to work on. Supposing that you have 3 threads and 3 rows, thread 0 will get i value 0, thread 1 will get 1, and so on. Obviously (or not), with more rows to iterate over, each thread would get more i values to work on. In all cases each thread gets a disjoint subset of the set of all values that i takes.
However, if thread 0 gets only i==0 to work on, the computation
s[i][j]=s[i][j]+local[i][j];
will only ever touch the 0-th row of local on thread 0. In the example I'm using, i never equals 1 on thread 0, so the values in the 1-th row of local on thread 0 never get added to row 1 of s.
Between them the 3 threads will update the 3 rows of s but each will only add its own row of its own version of local.
As for how to do what you want to do, have a look at this question and the accepted answer. You are attempting an array reduction which, for reasons explained here, is not directly supported by OpenMP for C or C++.

This should be a comment on the last paragraph of the answer, if I were permitted to post one.
The first method in the referenced question parallelizes the array filling but not the array reduction. According to the spec (v4, p. 122):
The critical construct restricts execution of the associated structured block to a single thread at a time.
Each thread reduces its own part of the array, but only one after the other; in essence that part of the code runs serially. The only reason for the summing loop to be inside the parallel region is that the arrays are local to each thread, which makes sense only when filling them benefits from the parallelism.
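For concreteness, here is a minimal sketch of the manual array-reduction pattern the answer and this comment refer to; the function name sum_local_matrices and the use of std::vector are my own illustration, not the questioner's code. Each thread fills its private local and then adds it into the shared s inside a critical construct, so the final accumulation runs one thread at a time:
#include <omp.h>
#include <vector>

void sum_local_matrices(std::vector<std::vector<double>> &s, int rows, int cols)
{
    #pragma omp parallel
    {
        // private, zero-initialised per-thread copy
        std::vector<std::vector<double>> local(rows, std::vector<double>(cols, 0.0));

        // ... fill 'local' with this thread's contribution ...

        // manual array reduction: the critical construct lets only one
        // thread at a time add its copy into the shared matrix
        #pragma omp critical
        {
            for (int i = 0; i < rows; i++)
                for (int j = 0; j < cols; j++)
                    s[i][j] += local[i][j];
        }
    }
}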

Does an OpenMP ordered for always assign parts of the loop to threads in order, too?

Background
I am relying on OpenMP parallelization and pseudo-random number generation in my program, but at the same time I would like the results to be perfectly replicable if desired (provided the same number of threads).
I'm seeding a thread_local PRNG for each thread separately like this,
{
    std::minstd_rand master{};
    #pragma omp parallel for ordered
    for(int j = 0; j < omp_get_num_threads(); j++)
        #pragma omp ordered
        global::tl_rng.seed(master());
}
and I've come up with the following way of producing count elements and putting them all in an array at the end in a deterministic order (results of thread 0 first, of thread 1 next, etc.):
std::vector<Element> all{};
...
#pragma omp parallel if(parallel)
{
    std::vector<Element> tmp{};
    tmp.reserve(count/omp_get_num_threads() + 1);
    // generation loop
    #pragma omp for
    for(size_t j = 0; j < count; j++)
        tmp.push_back(generateElement(global::tl_rng));
    // collection loop
    #pragma omp for ordered
    for(int j = 0; j < omp_get_num_threads(); j++)
        #pragma omp ordered
        all.insert(all.end(),
                   std::make_move_iterator(tmp.begin()),
                   std::make_move_iterator(tmp.end()));
}
The question
This seems to work, but I'm not sure if it's reliable (read: portable). Specifically, if, for example, the second thread is done with its share of the main loop early because its generateElement() calls happened to return quickly, won't it technically be allowed to pick the first iteration of the collecting loop? With my compiler that does not happen and it's always thread 0 doing j = 0, thread 1 doing j = 1, etc., as intended. Does that follow from the standard or is it allowed to be compiler-specific behaviour?
I could not find much about the ordered clause on the for directive except that it is required if the loop contains an ordered directive inside. Does it always guarantee that the threads will split the loop from the start in increasing thread_num? Where does it say so in a citable source? Or do I have to make my "generation" loop ordered as well – does it actually make a difference (performance- or logic-wise) when there is no ordered directive in it?
Please don't answer by experience, or by how OpenMP would logically be implemented. I'd like to be backed by the standard.
No, the code in its current state is not portable. It will work only if the default loop schedule is static, that is, the iteration space is divided into count / #threads contiguous chunks and then assigned to the threads in the order of their thread ID with a guaranteed mapping between chunk and thread ID. But the OpenMP specification does not prescribe any default schedule and leaves it to the implementation to pick one. Many implementations use static, but that is not guaranteed to always be the case.
If you add schedule(static) to all loop constructs, then the combination of ordered clause and ordered construct within each loop body will ensure that thread 0 will receive the first chunk of iterations and will also be the first one to execute the ordered construct. For the loops that run over the number of threads, the chunk size will be one, i.e. each thread will execute exactly one iteration and the order of the iterations of the parallel loop will match those of a sequential loop. The 1:1 mapping of iteration number to thread ID done by the static schedule will then ensure the behaviour you are aiming for.
Note that if the first loop, where you initialise the thread-local PRNGs, is in a different parallel region, you must ensure that both parallel regions execute with the same number of threads, e.g., by disabling dynamic team sizing (omp_set_dynamic(0);) or by explicit application of the num_threads clause.
As to the significance of the ordered clause + construct, it does not influence the assignment of iterations to threads, but it synchronises the threads and makes sure that the physical execution order will match the logical one. A statically scheduled loop without an ordered clause will still assign iteration 0 to thread 0, but there will be no guarantee that some other thread won't execute its loop body ahead of thread 0. Also, any code in the loop body outside of the ordered construct is still allowed to execute concurrently and out of order - see here for a more detailed explanation.
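Concretely, a minimal sketch of the collection loop from the question with the schedule made explicit (the answer's advice is to do the same on the other loop constructs as well); only the added schedule(static) clause differs from the original code:
// collection loop, schedule spelled out so iteration j is pinned to thread j
#pragma omp for schedule(static) ordered
for(int j = 0; j < omp_get_num_threads(); j++)
    #pragma omp ordered
    all.insert(all.end(),
               std::make_move_iterator(tmp.begin()),
               std::make_move_iterator(tmp.end()));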

Why are my nested for loops taking so long to compute?

I have a code that generates all of the possible combinations of 4 integers between 0 and 36.
This will be 37^4 = 1874161 combinations.
My code is written in MATLAB:
i=0;
for a = 0:36
    for b = 0:36
        for c = 0:36
            for d = 0:36
                i = i+1;
                combination(i,:) = [a,b,c,d];
            end
        end
    end
end
I've tested this using the number 3 instead of 36 and it worked fine.
If there are 1874161 combinations, and with an overly cautious guess of 100 clock cycles to do the additions and write the values, then on a 2.3 GHz PC this is:
1874161 * (1/2300000000) * 100 = 0.08148526086 seconds
A fraction of a second. But it has been running for about half an hour so far.
I did receive a warning that combination changes size every loop iteration, consider predefining its size for speed, but surely this can't affect it that much, can it?
As @horchler suggested, you need to preallocate the target array.
This is because your program is not O(N^4) without preallocation. Each time you add a new row to the array it needs to be resized, so a new, bigger array is created (as MATLAB does not know how big the array will end up, it probably grows it by only 1 row), then the old array is copied into it and lastly the old array is deleted. So when you have 10 rows in the array and are adding the 11th, a copy of 10 rows is added to that iteration... if I am not mistaken that leads to something like O(N^8) total work, which is massively bigger,
estimated as 1 + 2 + 3 + ... + N^4 ≈ ((N^4)^2)/2 copied rows.
Also, the array being reallocated keeps growing, so as i rises past each cache-size boundary the reallocations slow things down even more.
The only solution to this without preallocation is to store the result in a linked list.
Not sure MATLAB has this option, but that would need one or two pointers per item (a 32/64-bit value each), which makes your data 2+ times bigger.
If you need even more speed then there are further options (probably not for MATLAB):
use multi-threading: the array filling is fully parallelisable
use memory block copies (rep movsd) or DMA: the data is periodically repeating
You can also consider computing the values from i on the fly instead of storing the whole array; depending on the usage, that can be faster in some cases...
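For reference, a minimal sketch of the preallocated version of the original loop (the 37^4-by-4 size is known in advance, so the array is created once up front and the loop body only writes into it):
n = 37^4;                      % 1874161 combinations
combination = zeros(n, 4);     % preallocate once
i = 0;
for a = 0:36
    for b = 0:36
        for c = 0:36
            for d = 0:36
                i = i + 1;
                combination(i,:) = [a, b, c, d];
            end
        end
    end
end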

How Many cuRand States Required for Thread-Unique Random Numbers? (CUDA)

How many cuRand states are required to get unique random numbers in every thread? From other questions posted on the site, some have said that you need one per thread and others say you need one per block.
Does using one cuRand state per thread mean better random numbers?
Does using 1 cuRand state per thread slow down CUDA applications significantly (5000 + threads)?
Also for the implementation of using 1 cuRand state per thread, does this kernel look right and efficient?:
#include <curand_kernel.h>

__global__ void myKernel (const double *seeds) // seeds is an array of length = #threads
{
    int tid = ...; // set tid = global thread ID
    curandState s;
    curand_init(seeds[tid], 0, 0, &s); // per-thread seed, subsequence 0, offset 0
    ...
    double r = curand_uniform(&s); // note: returns float; curand_uniform_double exists for doubles
    ...
}
Assuming that all your threads stay synchronized, you want to generate the random numbers in all threads at the same time, as shown in your sample code. However, from what I understand, you do not need to seed cuRAND differently in each thread. I may be wrong on that one, though...
Now, they use the term "block" in the documentation as in "create all your random numbers in one block". They do not mean that one block of threads will do the work; rather, one block of memory will hold all the random numbers, all generated in one single call. So if you need, say, 4096 random numbers in your loop, you should create them all at once at the start, then load them back from memory later... You'll have to test whether it makes things faster in your case anyway. Often, many memory accesses slow things down, but calling the generator many times may well be slower, as it certainly needs to reload a heavy set of values to compute the next pseudo-random number(s).
Source:
http://docs.nvidia.com/cuda/curand/host-api-overview.html#performance-notes2
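As a rough sketch of that "one block of memory" approach, here is how the cuRAND host API can pre-generate a batch of doubles into device memory that kernels then simply read; the batch size 4096 echoes the example above, and error checking is omitted for brevity:
#include <cuda_runtime.h>
#include <curand.h>

int main()
{
    const size_t n = 4096;              // example batch size
    double *d_random = nullptr;
    cudaMalloc(&d_random, n * sizeof(double));

    curandGenerator_t gen;
    curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
    curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);

    // generate the whole batch in one call; kernels can then index into d_random
    curandGenerateUniformDouble(gen, d_random, n);

    // ... launch kernels that consume d_random here ...

    curandDestroyGenerator(gen);
    cudaFree(d_random);
    return 0;
}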

lock free programming - c++ atomic

I am trying to develop the following lock free code (c++11):
int val_max;
std::array<std::atomic<int>, 255> vector;
if (vector[i] > val_max) {
    val_max = vector[i];
}
The problem is that when there are many threads (128 threads), the result is not correct, because if, for example, val_max = 1 and three threads (with vector[i] = 5, 15 and 20) execute the next line at the same time, there is a data race.
So I don't know the best way to solve the problem with the functions available in C++11 (I cannot use mutexes or locks) while protecting the whole update.
Any suggestions? Thanks in advance!
You need to describe the bigger problem you need to solve, and why you want multiple threads for this.
If you have a lot of data and want to find the maximum, and you want to split that problem, you're doing it wrong. If all threads try to access a shared maximum, not only is it hard to get right, but by the time you have it right, you have fully serialized accesses, thus making the entire thing an exercise in adding complexity and thread overhead to a serial program.
The correct way to make it parallel is to give every thread a chunk of the array to work on (and the array members are not atomics), for which the thread calculates a local maximum; then, when all threads are done, have one thread find the maximum of the individual results, as in the sketch below.
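A minimal sketch of that approach with OpenMP, assuming a plain (non-atomic) data array and OpenMP 3.1 or later for the max reduction in C/C++; the function name parallel_max is mine. Each thread keeps a private running maximum over its chunk and the reduction combines them at the end:
#include <omp.h>
#include <vector>
#include <limits>

int parallel_max(const std::vector<int> &data)
{
    int val_max = std::numeric_limits<int>::min();
    // each thread works on its own chunk with a private copy of val_max;
    // the reduction combines the per-thread maxima after the loop
    #pragma omp parallel for reduction(max : val_max)
    for (size_t i = 0; i < data.size(); i++)
        if (data[i] > val_max)
            val_max = data[i];
    return val_max;
}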
1. Do an atomic fetch of val_max.
2. If the value fetched is greater than or equal to vector[i], stop, you are done.
3. Do an atomic compare-exchange -- compare val_max to the value you read in step 1 and exchange it for the value of vector[i] if it compares.
4. If the compare succeeded, stop, you are done.
5. Go to step 1; you raced with another thread that made forward progress. (A sketch of this loop follows below.)
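A minimal sketch of those steps, assuming val_max is itself declared as std::atomic<int> (the plain int in the question cannot be updated atomically); the helper name atomic_fetch_max is mine:
#include <atomic>

// lock-free "store if greater": raises val_max to v if v is larger
void atomic_fetch_max(std::atomic<int> &val_max, int v)
{
    int cur = val_max.load();                         // step 1: atomic fetch
    while (v > cur)                                   // step 2: nothing to do otherwise
    {
        // step 3: try to swap in v; on failure 'cur' is refreshed with the
        // value another thread wrote, and we re-check (step 5)
        if (val_max.compare_exchange_weak(cur, v))
            return;                                   // step 4: success
    }
}
In the question's loop this would be called as atomic_fetch_max(val_max, vector[i]); for each element.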

Parallelizing an algorithm with many exit points?

I'm faced with parallelizing an algorithm which in its serial implementation examines the six faces of a cube of array locations within a much larger three-dimensional array. (That is, select an array element, and then define a cube or cuboid around that element 'n' elements distant in x, y, and z, bounded by the bounds of the array.)
Each work unit looks something like this (Fortran pseudocode; the serial algorithm is in Fortran):
do n1=nlo,nhi
  do o1=olo,ohi
    if (somecondition(n1,o1)) then
      retval = .TRUE.
      RETURN
    endif
  end do
end do
Or C pseudocode:
for (n1=nlo,n1<=nhi,n++) {
for (o1=olo,o1<=ohi,o++) {
if(somecondition(n1,o1)!=0) {
return (bool)true;
}
}
}
There are six work units like this in the total algorithm, where the 'lo' and 'hi' values generally range between 10 and 300.
What I think would be best would be to schedule six or more threads of execution, round-robin if there aren't that many CPU cores, ideally with the loops executing in parallel, with the same goal as the serial algorithm: once somecondition() becomes True, execution among all the threads must immediately stop and a value of True be set in a shared location.
What techniques exist in a Windows compiler to facilitate parallelizing tasks like this? Obviously, I need a master thread which waits on a semaphore or the completion of the worker threads, so there is a need for nesting and signaling, but my experience with OpenMP is introductory at this point.
Are there message passing mechanisms in OpenMP?
EDIT: If the highest difference between "nlo" and "nhi" or "olo" and "ohi" is eight to ten, that would imply no more than 64 to 100 iterations for this nested loop, and no more than 384 to 600 iterations for the six work units together. Based on that, is it worth parallelizing at all?
Would it be better to parallelize the loop over the array elements and leave this algorithm serial, with multiple threads running the algorithm on different array elements? I'm thinking this from your comment "The time consumption comes from the fact that every element in the array must be tested like this. The arrays commonly have between four million and twenty million elements." The design of implementing the parallelization over the array elements is also flexible in terms of the number of threads. Unless there is a reason that the array elements have to be checked in some order?
It seems that the portion that you are showing us doesn't take that long to execute so making it take less clock time by making it parallel might not be easy ... there is always some overhead to multiple threads, and if there is not much time to gain, parallel code might not be faster.
One possibility is to use OpenMP to parallelize over the 6 loops -- declare logical :: array(6), allow each loop to run to completion, and then retval = any(array). Then you can check this value and return outside the parallelized loop. Add a schedule(dynamic) to the parallel do statement if you do this. Or, have a separate !$omp parallel and then put !$omp do schedule(dynamic) ... !$omp end do nowait around each of the 6 loops.
Or, you can follow the good advice by @M.S.B. and parallelize the outermost loop over the whole array. The problem here is that you cannot have a RETURN inside a parallel loop -- so label the second outermost loop (the largest one within the parallel part), and EXIT that loop -- something like
retval = .FALSE.
!$omp parallel do default(private) shared(BIGARRAY,retval) schedule(dynamic,1)
do k=1,NN
  if(.not. retval) then
    outer2: do j=1,NN
      do i=1,NN
        ! --- your loop #1
        do n1=nlo,nhi
          do o1=olo,ohi
            if (somecondition(BIGARRAY(i,j,k),n1,o1)) then
              retval = .TRUE.
              exit outer2
            endif
          end do
        end do
        ! --- your loops #2 ... #6 go here
      end do
    end do outer2
  end if
end do
!$omp end parallel do
[edit: the if statement is there presuming that you need to find out if there is at least one element like that in the big array. If you need to figure the condition for every element, you can similarly either add a dummy loop exit or goto, skipping the rest of the processing for that element. Again, use schedule(dynamic) or schedule(guided).]
As a separate point, you might also want to check whether it is a good idea to go through the innermost loop by some larger step (depending on float size), compute a vector of logicals on each iteration and then aggregate the results, e.g. something like if(count(somecondition(x(o1:o1+step,n1,k)))>0); in this case the compiler may be able to vectorize somecondition.
I believe you can do what you want with the task construct introduced in OpenMP 3; Intel Fortran supports tasking in OpenMP. I don't use tasks often so I won't offer you any wonky pseudocode.
You already mentioned the obvious way to stop all threads as soon as any thread finds the ending condition: have each check some shared variable which gives the status of the ending condition, thereby determining whether to break out of the loops. Obviously this is an overhead, so if you decide to take this approach I would suggest a few things:
Use atomics to check the ending condition; this avoids expensive memory flushing as just the variable in question is flushed. Move to OpenMP 3.1, where some new atomic operations are supported.
Check infrequently, maybe like once per outer iteration. You should only be parallelizing large cases to overcome the overhead of multithreading.
This one is optional, but you can try adding compiler hints, e.g. if you expect a certain condition to be false most of the time, the compiler will optimize the code accordingly.
Another (somewhat dirty) approach is to use shared variables for the loop ranges for each thread, maybe use a shared array where index n is for thread n. When one thread finds the ending condition, it changes the loop ranges of all the other threads so that they stop. You'll need the appropriate memory synchronization. Basically the overhead has now moved from checking a dummy variable to synchronizing/checking loop conditions. Again probably not so good to do this frequently, so maybe use shared outer loop variables and private inner loop variables.
On another note, this reminds me of the classic polling versus interrupt problem. Unfortunately I don't think OpenMP supports interrupts where you can send some kind of kill signal to each thread.
There are hacking work-arounds like using a child process for just this parallel work and invoking the operating system scheduler to emulate interrupts, however this is rather tricky to get correct and would make your code extremely unportable.
Update in response to comment:
Try something like this:
char shared_var = 0;
#pragma omp parallel
{
    char private_var;
    int n1, o1;
    // you should have some method for setting loop ranges for each thread
    for (n1 = nlo; n1 <= nhi; n1++) {
        for (o1 = olo; o1 <= ohi; o1++) {
            if (somecondition(n1,o1) != 0) {
                #pragma omp atomic write
                shared_var = 1; // done marker, this will also trigger the other break below
                break; // could instead use goto to break out of both loops in 1 go
            }
        }
        #pragma omp atomic read
        private_var = shared_var;
        if (private_var != 0) break;
    }
}
A suitable parallel approach might be to let each worker examine a part of the overall problem, exactly as in the serial case, and use a local (non-shared) variable for the result (retval). Finally, do a reduction over all workers on these local variables into a shared overall result, as sketched below.
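A minimal sketch of that idea in C-style OpenMP, assuming the six nested-loop work units for one element are wrapped in a hypothetical check_element() helper and the array is NN x NN x NN; each thread keeps its own private retval and the ||-reduction combines them at the end (the global early-exit machinery discussed above is omitted for brevity):
// hypothetical helper: runs the six work units for one array element
bool check_element(int i, int j, int k);

bool any_element_matches(int NN)
{
    bool retval = false;
    // each thread gets a private copy of retval; the reduction ORs the
    // per-thread results into the shared value after the loop
    #pragma omp parallel for schedule(dynamic) reduction(||:retval)
    for (int k = 1; k <= NN; k++)
        for (int j = 1; j <= NN; j++)
            for (int i = 1; i <= NN; i++)
                if (!retval && check_element(i, j, k))
                    retval = true;
    return retval;
}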
