I'm attempting to benchmark the speedup of OpenMP-aware code. I'm using the Crypto++ library and the Rabin-Williams signature class. The class implements Bernstein's Tweaked Roots and has the following code:
ModularArithmetic modp(m_p), modq(m_q);
#pragma omp parallel sections
{
#pragma omp section
m_pre_2_9p = modp.Exponentiate(2, (9 * m_p - 11)/8);
#pragma omp section
m_pre_2_3q = modq.Exponentiate(2, (3 * m_q - 5)/8);
#pragma omp section
m_pre_q_p = modp.Exponentiate(m_q, m_p - 2);
}
From Crypto++'s perspective, all I need to do is something like the following:
RWSS<P1363_EMSA2, SHA256>::Signer signer(...);
signer.Precompute();
// Ready to sign
After I perform Precompute(), Crypto++ is ready to go and I can sign away.
I also understand OpenMP has to start up, and it has features like dynamic teams. I tried to reference previous benchmarking papers, like Performance Evaluation of OpenMP Benchmarks on Intel's Quad Core Processors, but they don't call out what they did. I also grepped sources like the EPCC OpenMP micro-benchmark suite, but it does not call omp_set_dynamic to remove the associated overhead.
What steps should I perform to get OpenMP into a clean-room-like state so that I'm actually measuring the speedup of the big-integer/signing operations, and not spending time in OpenMP startup or shutdown code, or growing or shrinking the team? What do I do for OpenMP?
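For illustration, here is a minimal warm-up sketch of what I imagine a "clean room" would look like (my assumption only; the exact settings have not been verified against Crypto++):

#include <omp.h>

void warm_up_openmp(int threads)
{
    omp_set_dynamic(0);            // keep the team size fixed
    omp_set_num_threads(threads);  // e.g. omp_get_num_procs()

    #pragma omp parallel
    {
        // Empty region: forces the runtime to create the thread team
        // now rather than inside the timed section.
    }
}

// Hypothetical usage, mirroring the question:
//   warm_up_openmp(omp_get_num_procs());
//   double t0 = omp_get_wtime();
//   signer.Precompute();
//   double t1 = omp_get_wtime();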
I'm having trouble efficiently parallelizing the following code:
# pragma omp for nowait
for (int i = 0; i < M; i++) {
# pragma omp atomic
centroids[points[i].cluster].points_in_cluster++;
}
This runs slower than the following, I guess due to the overhead of omp for:
# pragma omp single nowait
for (int i = 0; i < M; i++) {
centroids[points[i].cluster].points_in_cluster++;
}
Is there any way to make this go faster?
Theory
While atomics are certainly better than locks or critical regions due to their implementation in hardware on most platforms, they should still in general be avoided if possible, as they do not scale well: increasing the number of threads creates more atomic collisions and therefore more overhead. Further hardware-/implementation-specific bottlenecks due to atomics are described in the comments below the question and in this answer by @PeterCordes.
The alternative to atomics is a parallel reduction algorithm. Assuming that there are many more points than centroids, one can use OpenMP's reduction clause to let every thread have a private version of centroids. These private histograms are consolidated in an implementation-defined fashion after they have been filled.
There is no guarantee that this technique is faster than using atomics in every possible case. It depends not only on the size of the two index spaces, but also on the data, since the data determines the number of collisions when using atomics. A proper parallel reduction algorithm is, however, in general expected to scale better to large numbers of threads.
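For intuition, this is roughly what such a reduction does by hand (a sketch only, reusing M, points, centroids and num_centroids from the question and assuming <vector> is available): each thread fills its own counts and the partial results are merged once at the end.

#pragma omp parallel
{
    // One private histogram per thread; this is what the reduction
    // clause would create implicitly.
    std::vector<int> local_counts(num_centroids, 0);

    #pragma omp for nowait
    for (int i = 0; i < M; i++)
        local_counts[points[i].cluster]++;

    // Merge the partial histograms; one thread at a time.
    #pragma omp critical
    for (int c = 0; c < num_centroids; c++)
        centroids[c].points_in_cluster += local_counts[c];
}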
Practice
The problem with using a reduction in your code is the Array-of-Structs (AoS) data layout. Specifying
# pragma omp for reduction(+: centroids[0:num_centroids])
will produce an error at build time, as the compiler does not know how to reduce the user-defined type of centroids. Specifying
# pragma omp for reduction(+: centroids[0:num_centroids].points_in_cluster)
does not work either as it is not a valid OpenMP array section.
One can try to use a custom reduction here, but I do not know how to combine a user-defined reduction with OpenMP array sections (see the edit at the end). Also, it could be very inefficient to create all the unused variables in the centroid struct on every thread.
With a Struct-of-Arrays (SoA) data layout you would just have a plain integer buffer, e.g. int *points_in_clusters, which could then be used in the following way (assuming that there are num_centroids elements in centroids and now in points_in_clusters):
# pragma omp for nowait reduction(+: points_in_clusters[0:num_centroids])
for (int i = 0; i < M; i++) {
points_in_clusters[points[i].cluster]++;
}
If you cannot just change the data layout, you could still use some scratch space for the OpenMP reduction and afterwards copy the results back to the centroid structs in another loop. But this additional copy operation could eat into the savings from using reduction in the first place.
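For completeness, the scratch-space variant could look roughly like this (a sketch only, reusing the names from above and assuming <cstdlib> for calloc/free):

// Reduce into a plain int scratch buffer, then copy the counts back
// into the AoS centroids in a separate, cheap loop.
int *scratch = (int *)calloc(num_centroids, sizeof(int));

#pragma omp parallel for reduction(+: scratch[0:num_centroids])
for (int i = 0; i < M; i++)
    scratch[points[i].cluster]++;

// Serial copy-back; with many more points than centroids this is cheap,
// but it is still extra work compared to reducing in place.
for (int c = 0; c < num_centroids; c++)
    centroids[c].points_in_cluster += scratch[c];

free(scratch);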
Using SoA also has benefits for (auto-) vectorization (of other loops) and potentially improves cache locality for regular access patterns. AoS on the other hand can be better for cache locality when encountering random access patterns (e.g. most sorting algorithms if the comparison makes use of multiple variables from the struct).
PS: Be careful with nowait. Does the following work really not depend on the resulting points_in_cluster?
EDIT: I removed my alternative implementation using a user-defined reduction operator as it was not working. I seem to have fixed the problem, but I do not have enough confidence in this implementation (performance- and correctness-wise) to add it back into the answer. Feel free to improve upon the linked code and post another answer.
I have a C++ class, several of whose functions have OpenMP parallel for loops. I'm building it into two apps with MSVC 2017, and find that one of those functions runs differently in the two apps. The function has two separate parallel for loops. In one build, the VS debugger shows them both using 7 cores for a solid second while processing a block of test data; in the other, it shows just two blips of multicore usage, presumably at the beginning of each parallel section, but only one processor runs most of the time.
These functions are deep inside the code for the class, which is identical in the two apps. The builds have the same compiler and linker options as far as I can see. I generate the projects with CMake and never modify them by hand.
Can anyone suggest possible reasons for this behavior? I am fully aware of other ways to parallelize code, so please don't tell me about those. I am just looking for expertise on OpenMP under MSVC.
I expect the two calls are passing in significantly different amounts of work. Consider (an example: trivial, typed into this post, not compiled, and not the way to write this!) code like
void scale(int n, double *d, double f) {
#pragma omp parallel for
for (int i=0; i<n; i++)
d[i] = d[i] * f;
}
If invoked with a large vector where n == 10000, you'll get some parallelism and many threads working. If called with n == 3 there's obviously only work for three threads!
If you use #pragma omp parallel for schedule(dynamic) it's quite possible that even with ten or twenty iterations a single thread will execute most of them.
In summary: context matters.
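If it is unclear how much work the two apps actually pass in, a quick way to check is a throwaway diagnostic like the one below (a sketch using the n, d and f from the example above; assumes <cstdio> and <omp.h>), which prints the trip count and the team size once per call:

#pragma omp parallel for
for (int i = 0; i < n; i++) {
    // Only the thread that owns iteration 0 prints.
    if (i == 0)
        printf("n = %d, threads = %d\n", n, omp_get_num_threads());
    d[i] = d[i] * f;
}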
I'm studying OpenMP now, and I have a question. The run time of the following code and of the same code without the parallel directive is statistically equal, even though all the threads are clearly executing the loop. I tried to look at some guides on the internet, but that did not help. So the question is, what is wrong with this parallel section?
int sumArrayParallel( )
{
int i = 0;
int sum = 0;
#pragma omp parallel for
for (i = 0; i < arraySize; ++i)
{
cout << omp_get_thread_num() << " ";
sum += testArray[i];
}
return sum;
}
There are two very common causes of OpenMP codes failing to exhibit improved performance over their serial counterparts:
The work being done is not sufficient to outweigh the overhead of parallel computation. Think of there being a cost, in time, for setting up a team of threads, for distributing work to them, and for gathering results from them. Unless this cost is less than the time saved by parallelising the computation, an OpenMP code, even if correct, will not show any speed-up and may show the opposite. You haven't shown us the numbers, so do the calculations on this yourself.
The programmer imposes serial operation on the parallel program, perhaps by wrapping data access inside memory fences, perhaps by accessing platform resources which are inherently serial. I suspect (but my knowledge of C is lousy) that your writing to cout may inadvertently serialise that part of your computation.
Of course, you can have a mixture of these two problems, too much serialisation and not enough work, resulting in disappointing performance.
For further reading this page on Intel's website is useful, and not just for beginners.
I think, though, that you have a more serious problem with your code than its poor parallel performance. Does the OpenMP version produce the correct sum? Since you have made no specific provision, sum is shared by all threads and they will race for access to it. While learning OpenMP it is a very good idea to attach the clause default(none) to your parallel regions and to take responsibility for defining the shared/private status of each variable in each region. Then, once you are fluent in OpenMP, you will know why it makes sense to continue to use the default(none) clause.
Even if you reply Yes, the code does produce the correct result, the data race exists and your program can't be trusted. Data races are funny like that: they don't show up in all the tests you run; then, once you roll out your code into production, bang! and egg all over your face.
However, you seem to be rolling your own reduction and OpenMP provides the tools for doing this. Investigate the reduction clause in your OpenMP references. If I read your code correctly, and taking into account the advice above, you could rewrite the loop to
#pragma omp parallel for default(none) shared(sum, arraySize, testArray) private(i) reduction(+:sum)
for (i = 0; i < arraySize; ++i)
{
sum += testArray[i];
}
In a nutshell, using the reduction clause tells OpenMP to sort out the problems of summing a single value from work distributed across threads, avoiding race conditions etc.
Since OpenMP makes loop iteration variables private by default, you could omit the clause private(i) from the directive without too much risk. Even better, though, might be to declare it inside the for statement:
#pragma omp parallel for default(none) shared(sum, arraySize, testArray) reduction(+:sum)
for (int i = 0; i < arraySize; ++i)
Variables declared inside parallel regions are (leaving aside some special cases) always private.
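Putting the pieces together, a corrected version of the original function might look like this (a sketch; I have left the cout out of the loop since, as noted above, it serialises part of the computation and distorts timing):

// Sketch of the corrected routine: the reduction clause gives each
// thread a private copy of sum and combines them at the end.
int sumArrayParallel()
{
    int sum = 0;
    #pragma omp parallel for default(none) shared(testArray, arraySize) reduction(+:sum)
    for (int i = 0; i < arraySize; ++i)
    {
        sum += testArray[i];
    }
    return sum;
}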
I am trying to parallelize the Guibas-Stolfi Delaunay triangulation using OpenMP.
There are two things to parallelize here:
the mergesort(), which I did, and
the divide(), where I am stuck.
I have tried all possible approaches, but in vain.
The approach followed in divide() (divide and conquer) is the same as that of mergesort(), but applying the same parallelization technique (omp sections) works only for mergesort.
I tried the parallelization technique shown here, but even that doesn't work.
I read about nested parallelism somewhere, but I am not sure how to implement it.
Can anybody explain how divide-and-conquer algorithms are parallelized?
CODE: I called mergesort twice in the main function and applied the sections construct. Doing the same for the divide function doesn't work:
#pragma omp parallel
{
#pragma omp sections nowait
{
#pragma omp section
{
merge_sort(p_sorted, p_temp, 0, n/2);
}
#pragma omp section
{
merge_sort(p_sorted, p_temp, (n/2)+1, n-1);
}
}
}
I was successful in parallelizing this using CreateThread calls on Windows; the trick is to divide the points into 2^n buffers, process each buffer in a separate thread, and then merge adjacent edges successively until one final merge.
I have a demonstration program to create random data, triangulate it, and display the results (for smaller cases). It doesn't look like this site lets me upload the .zip I have of the program and display tool. If you can suggest an upload site or provide an email I'll send it to you.
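On the OpenMP side, the usual way to express this kind of recursive divide and conquer is with tasks rather than nested sections. Below is only a rough sketch of that shape: point, divide(), merge_halves() and CUTOFF are placeholders for the real Guibas-Stolfi routines, not working triangulation code.

// Task-based divide and conquer: each recursive half becomes a task;
// a cutoff keeps very small subproblems serial.
void divide_parallel(point *pts, int lo, int hi)
{
    if (hi - lo < CUTOFF) {        // CUTOFF is a tuning placeholder
        divide(pts, lo, hi);       // serial divide step (placeholder)
        return;
    }
    int mid = lo + (hi - lo) / 2;

    #pragma omp task shared(pts)
    divide_parallel(pts, lo, mid);

    #pragma omp task shared(pts)
    divide_parallel(pts, mid + 1, hi);

    #pragma omp taskwait             // both halves done before merging
    merge_halves(pts, lo, mid, hi);  // placeholder for the edge merge
}

// Entry point: one thread spawns the first call, the team runs the tasks.
// #pragma omp parallel
// #pragma omp single nowait
// divide_parallel(p_sorted, 0, n - 1);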
I have the following program.
nv is around 100, dgemm is 20x100 or so, so there is plenty of work to go around:
#pragma omp parallel for schedule(dynamic,1)
for (int c = 0; c < int(nv); ++c) {
omp::thread thread;
matrix &t3_c = vv_.at(omp::num_threads()+thread);
if (terms.first) {
blas::gemm(1, t2_, vvvo_, 1, t3_c);
blas::gemm(1, vvvo_, t2_, 1, t3_c);
}
matrix &t3_b = vv_[thread];
if (terms.second) {
matrix &t2_ci = vo_[thread];
blas::gemm(-1, t2_ci, Vjk_, 1, t3_c);
blas::gemm(-1, t2_ci, Vkj_, 0, t3_b);
}
}
However, with GCC 4.4 (GOMP v1), gomp_barrier_wait_end accounts for nearly 50% of the runtime. Changing GOMP_SPINCOUNT alleviates the overhead, but then only 60% of the cores are used. The same happens with OMP_WAIT_POLICY=passive. The system is Linux with 8 cores.
How can I get full utilization without the spinning/waiting overhead?
The barrier is a symptom, not the problem. The reason that there's lots of waiting at the end of the loop is that some of the threads are done well before the others, and they all wait at the end of the for loop for quite a while until everyone's done.
This is a classic load-imbalance problem, which is weird here, since it's just a bunch of matrix multiplies. Are they of varying sizes? How are they laid out in memory, in terms of NUMA stuff - are they all currently sitting in one core's cache, or are there other sharing issues? Or, more simply -- are there only 9 matrices, so that the remaining 8 are doomed to be stuck waiting for whoever got the last one?
When this sort of thing happens in a larger parallel block of code, sometimes it's ok to proceed to the next block of code while some of the loop iterations aren't done yet; there you can add the nowait clause to the for, which will override the default behaviour and get rid of the implied barrier. Here, though, since the parallel block is exactly the size of the for loop, that can't really help.
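One way to confirm the imbalance is a throwaway instrumentation pass like the sketch below (assumes <cstdio> and <omp.h>; do_iteration() is a placeholder for the original loop body, so treat this as a shape rather than drop-in code): time each iteration and print which thread ran it and for how long.

#pragma omp parallel for schedule(dynamic,1)
for (int c = 0; c < int(nv); ++c) {
    double t0 = omp_get_wtime();
    do_iteration(c);               // placeholder for the gemm calls above
    double t1 = omp_get_wtime();
    // If a handful of iterations dominate, the rest of the team sits
    // idle at the final barrier, which is exactly the profile you see.
    printf("iter %d on thread %d: %.3f s\n",
           c, omp_get_thread_num(), t1 - t0);
}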
Could it be that your BLAS implementation also calls OpenMP inside? That is, unless you only see one call to gomp_barrier_wait_end.