Parallelise list of independent instructions with OpenMP - parallel-processing

I have a (long) list of independent instructions that can be executed in parallel. They are not in a loop; they simply look like this:
istr1;
istr2;
...
istrN;
How can I parallelise them using OpenMP? I know I could manually split them among some Pthreads, but I was wondering if there's something more straightforward, and that can automatically adjust the number of threads to the number of CPUs, just like OpenMP does.

That's what OpenMP sections are for.
#pragma omp parallel sections
{
    #pragma omp section
    istr1;
    #pragma omp section
    istr2;
    ...
    #pragma omp section
    istrN;
}
Another option would be to use explicit tasks:
#pragma omp parallel
{
    #pragma omp single
    {
        #pragma omp task
        istr1;
        #pragma omp task
        istr2;
        ...
        #pragma omp task
        istrN;
    }
}
The tasks are created inside a single construct so that only one thread creates them (otherwise each task would be created num_threads times). Using explicit tasks might result in better performance, since most OpenMP runtimes use fairly naive logic when scheduling sections.
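For reference, here is a minimal self-contained sketch of the task variant; the istr1/istr2/istr3 functions are just hypothetical stand-ins for the independent instructions (compile with -fopenmp):
#include <stdio.h>
#include <omp.h>

static void istr1(void) { printf("istr1 on thread %d\n", omp_get_thread_num()); }
static void istr2(void) { printf("istr2 on thread %d\n", omp_get_thread_num()); }
static void istr3(void) { printf("istr3 on thread %d\n", omp_get_thread_num()); }

int main(void)
{
    #pragma omp parallel        /* team size defaults to the number of CPUs */
    {
        #pragma omp single      /* one thread creates the tasks ...         */
        {
            #pragma omp task
            istr1();
            #pragma omp task
            istr2();
            #pragma omp task
            istr3();
        }                       /* ... and the whole team executes them     */
    }
    return 0;
}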

Related

cython openmp single, barrier

I'm trying to use OpenMP in Cython. I need to do two things in Cython:
i) use the #pragma omp single{} scope in my Cython code,
ii) use #pragma omp barrier.
Does anyone know how to do this in cython?
Here are more details. I have a nogil cdef function my_func() which I call in an OpenMP for loop:
from cython.parallel cimport prange
cimport openmp
cdef int i
with nogil:
    for i in prange(10, schedule='static', num_threads=10):
        my_func(i)
Inside my_func I need to place a barrier to wait for all threads to catch up, then execute a time-consuming operation only in one of the threads and with the gil acquired, and then release the barrier so all threads resume simultaneously.
cdef int my_func(...) nogil:
    ...
    # put a barrier until all threads catch up, e.g. #pragma omp barrier
    with gil:
        # execute time consuming operation in one thread only, e.g. #pragma omp single{}
    # remove barrier after the above single thread has finished and continue the operation over all threads in parallel, e.g. #pragma omp barrier
    ...
Cython has some support for OpenMP, but it is probably easier to write the code in C and to wrap the resulting code with Cython if OpenMP pragmas are used extensively.
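To give an idea of what that C side could look like, here is a hedged sketch in plain C/OpenMP of the barrier / single / barrier pattern from the question; do_work() and heavy_operation() are hypothetical placeholders. Note that an OpenMP barrier may not be closely nested inside a worksharing loop, so the pattern has to live at the level of the parallel region rather than inside an omp for:
#include <stdio.h>
#include <omp.h>

static void do_work(int tid)      { printf("work on thread %d\n", tid); }
static void heavy_operation(void) { printf("heavy operation, one thread\n"); }

int main(void)
{
    #pragma omp parallel
    {
        do_work(omp_get_thread_num());

        #pragma omp barrier            /* wait for all threads to catch up */

        #pragma omp single
        heavy_operation();             /* exactly one thread; implicit barrier at the end */

        do_work(omp_get_thread_num()); /* all threads resume together */
    }
    return 0;
}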
As an alternative, you could use verbatim C code and some tricks with defines to bring part of that functionality into Cython. Using pragmas in defines isn't straightforward (_Pragma is the C99 solution; MSVC, as usual, does its own thing with __pragma), but here is a proof of concept for Linux/gcc:
cdef extern from *:
    """
    #define START_OMP_PARALLEL_PRAGMA() _Pragma("omp parallel") {
    #define END_OMP_PRAGMA() }
    #define START_OMP_SINGLE_PRAGMA() _Pragma("omp single") {
    #define START_OMP_CRITICAL_PRAGMA() _Pragma("omp critical") {
    """
    void START_OMP_PARALLEL_PRAGMA() nogil
    void END_OMP_PRAGMA() nogil
    void START_OMP_SINGLE_PRAGMA() nogil
    void START_OMP_CRITICAL_PRAGMA() nogil
We make Cython believe that START_OMP_PARALLEL_PRAGMA() and co. are nogil functions, so it puts them into the generated C code, where they are picked up by the preprocessor.
We must use the syntax
#pragma omp single{
//do_something
}
and not
#pragma omp single
do_something
because of the way Cython generates C code.
The usage could look as follows (I'm avoiding cython.parallel.parallel here, as it does too much magic for this simple example):
%%cython -c=-fopenmp --link-args=-fopenmp
cdef extern from *:  # as listed above
    ...

def test_omp():
    cdef int a=0
    cdef int b=0
    with nogil:
        START_OMP_PARALLEL_PRAGMA()
        START_OMP_SINGLE_PRAGMA()
        a+=1
        END_OMP_PRAGMA()
        START_OMP_CRITICAL_PRAGMA()
        b+=1
        END_OMP_PRAGMA() # CRITICAL
        END_OMP_PRAGMA() # PARALLEL
    print(a,b)
Calling test_omp prints "1 2" on my machine with 2 threads, as expected (one could change the number of threads using openmp.omp_set_num_threads(10)).
However, the above is still very brittle - some error checking by Cython can lead to invalid code (Cython uses goto for control flow, and it is not possible to jump out of an OpenMP block). Something like this happens in your example:
cimport numpy as np
import numpy as np

def test_omp2():
    cdef np.int_t[:] a=np.zeros(1,dtype=int)
    START_OMP_SINGLE_PRAGMA()
    a[0]+=1
    END_OMP_PRAGMA()
    print(a)
Because of bounds checking, Cython will produce:
START_OMP_SINGLE_PRAGMA();
...
//check bounds:
if (unlikely(__pyx_t_6 != -1)) {
    __Pyx_RaiseBufferIndexError(__pyx_t_6);
    __PYX_ERR(0, 30, __pyx_L1_error) // HERE WE GO A GOTO!
}
...
END_OMP_PRAGMA();
In this special case, setting boundscheck to False, i.e.
cimport cython

@cython.boundscheck(False)
def test_omp2():
    ...
would solve the issue for the above example, but probably not in general.
Once again: using openmp in C (and wrapping the functionality with Cython) is a more enjoyable experience.
As a side note: Python threads (the ones governed by the GIL) and OpenMP threads are different and know nothing about each other. The above example would also work (compile and run) correctly without releasing the GIL - OpenMP threads do not care about the GIL, and since no Python objects are involved nothing can go wrong. Thus I have added nogil to the wrapped "functions", so they can also be used inside nogil blocks.
However, when the code gets more complicated, it becomes less obvious that variables shared between different Python threads aren't being accessed (above all because those accesses could happen in the generated C code and are not visible in the Cython code), so it might be wiser not to release the GIL while using OpenMP.

difference between omp critical and omp single

I am trying to understand the exact difference between #pragma omp critical and #pragma omp single in OpenMP:
Microsoft definitions for these are:
Single: Lets you specify that a section of code should be executed on
a single thread, not necessarily the master thread.
Critical: Specifies that code is only be executed on one thread at a
time.
So it means that in both cases the exact section of code afterwards would be executed by just one thread and the other threads will not enter that section, e.g. if we print something, we will see the result on screen only once, right?
How about the difference? It looks like critical takes care of the timing of execution, but single doesn't! But I don't see any difference in practice! Does it mean that some kind of waiting or synchronization for the other threads (which do not enter that section) happens with critical, but there is nothing that holds the other threads in single? How can it change the outcome in practice?
I would appreciate it if anyone could clarify this, ideally with an example. Thanks!
single and critical are two very different things. As you mentioned:
single specifies that a section of code should be executed by single thread (not necessarily the master thread)
critical specifies that code is executed by one thread at a time
So the former will be executed only once, while the latter will be executed as many times as there are threads.
For example the following code
int a=0, b=0;
#pragma omp parallel num_threads(4)
{
    #pragma omp single
    a++;
    #pragma omp critical
    b++;
}
printf("single: %d -- critical: %d\n", a, b);
will print
single: 1 -- critical: 4
I hope you see the difference better now.
For the sake of completeness, I can add that:
master is very similar to single with two differences:
master will be executed by the master thread only, while single can be executed by whichever thread reaches the region first; and
single has an implicit barrier upon completion of the region, where all threads wait for synchronization, while master doesn't have any.
atomic is very similar to critical, but is restricted to a selection of simple operations.
I added these precisions since these two pairs of constructs are often the ones people tend to mix up...
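Here is a small sketch contrasting the constructs mentioned above (the variable names are arbitrary, and the expected output assumes the runtime actually grants 4 threads):
#include <stdio.h>

int main(void)
{
    int m = 0, s = 0, c = 0;

    #pragma omp parallel num_threads(4)
    {
        #pragma omp master       /* thread 0 only, no barrier at the end */
        m++;

        #pragma omp single       /* one thread, implicit barrier at the end */
        s++;

        #pragma omp atomic       /* every thread, but the updates never collide */
        c++;
    }
    printf("master: %d -- single: %d -- atomic: %d\n", m, s, c);  /* 1 1 4 */
    return 0;
}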
single and critical belong to two completely different classes of OpenMP constructs. single is a worksharing construct, alongside for and sections. Worksharing constructs are used to distribute a certain amount of work among the threads. Such constructs are "collective" in the sense that in correct OpenMP programs all threads must encounter them while executing and moreover in the same sequential order, also including the barrier constructs. The three worksharing constructs cover three different general cases:
for (a.k.a. loop construct) distributes automatically the iterations of a loop among the threads - in most cases all threads get work to do;
sections distributes a sequence of independent blocks of code among the threads - some threads get work to do. This is a generalisation of the for construct as a loop with 100 iterations could be expressed as e.g. 10 sections of loops with 10 iterations each.
single singles out a block of code for execution by one thread only, often the first one to encounter it (an implementation detail) - only one thread gets work. single is to a great extent equivalent to sections with a single section only.
A common trait of all worksharing constructs is the presence of an implicit barrier at their end. This barrier can be turned off by adding the nowait clause to the corresponding OpenMP construct, but the standard does not require such behaviour, and with some OpenMP runtimes the barrier might remain despite the presence of nowait. Incorrectly ordered (i.e. out of sequence in some of the threads) worksharing constructs can therefore lead to deadlocks. A correct OpenMP program will never deadlock when the barriers are present.
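As a rough illustration of nowait, here is a sketch in which the barrier at the end of the first loop is removed, so a thread that finishes its share of a[] can move on to b[] without waiting; this is only safe under the assumption made here that the two loops touch independent arrays:
void fill(double *a, double *b, int n)
{
    #pragma omp parallel
    {
        #pragma omp for nowait
        for (int i = 0; i < n; ++i)
            a[i] = 1.0;

        #pragma omp for          /* keeps its implicit barrier, as usual */
        for (int i = 0; i < n; ++i)
            b[i] = 2.0;
    }
}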
critical is a synchronisation construct, alongside master, atomic, and others. Synchronisation constructs are used to prevent race conditions and to bring order in the execution of things.
critical prevents race conditions by preventing the simultaneous execution of code among the threads in the so-called contention group. This means all threads from all parallel regions encountering similarly named critical constructs get serialised;
atomic turns certain simple memory operations into atomic ones, usually by utilising special assembly instructions. Atomics complete at once as a single non-breakable unit. For example, an atomic read from some location by one thread, which happens concurrently with an atomic write to the same location by another thread, will either return the old value or the updated value, but never some kind of an intermediate mash-up of bits from both the old and the new values;
master singles out a block of code for execution by the master thread (thread with ID of 0) only. Unlike single, there is no implicit barrier at the end of the construct and also there is no requirement that all threads must encounter the master construct. Also, the lack of implicit barrier means that master does not flush the shared memory view of the threads (this is an important but very poorly understood part of OpenMP). master is basically a shorthand for if (omp_get_thread_num() == 0) { ... }.
critical is a very versatile construct, as it is able to serialise different pieces of code in very different parts of the program, even in different parallel regions (significant only in the case of nested parallelism). Each critical construct has an optional name provided in parentheses immediately after it. Anonymous critical constructs share the same implementation-specific name. Once a thread enters such a construct, any other thread encountering another construct of the same name is put on hold until the original thread exits its construct. Then the serialisation process continues with the rest of the threads.
An illustration of the concepts above follows. The following code:
#pragma omp parallel num_threads(3)
{
    foo();
    bar();
    ...
}
results in something like:
thread 0: -----< foo() >< bar() >-------------->
thread 1: ---< foo() >< bar() >---------------->
thread 2: -------------< foo() >< bar() >------>
(thread 2 is purposely a latecomer)
Having the foo(); call within a single construct:
#pragma omp parallel num_threads(3)
{
    #pragma omp single
    foo();
    bar();
    ...
}
results in something like:
thread 0: ------[-------|]< bar() >----->
thread 1: ---[< foo() >-|]< bar() >----->
thread 2: -------------[|]< bar() >----->
Here [ ... ] denotes the scope of the single construct and | is the implicit barrier at its end. Note how the latecomer thread 2 makes all other threads wait. Thread 1 executes the foo() call as the example OpenMP runtime chooses to assign the job to the first thread to encounter the construct.
Adding a nowait clause might remove the implicit barrier, resulting in something like:
thread 0: ------[]< bar() >----------->
thread 1: ---[< foo() >]< bar() >----->
thread 2: -------------[]< bar() >---->
Having the foo(); call within an anonymous critical construct:
#pragma omp parallel num_threads(3)
{
    #pragma omp critical
    foo();
    bar();
    ...
}
results in something like:
thread 0: ------xxxxxxxx[< foo() >]< bar() >-------------->
thread 1: ---[< foo() >]< bar() >------------------------->
thread 2: -------------xxxxxxxxxxxx[< foo() >]< bar() >--->
The xxxxx... marks the time a thread spends waiting for other threads executing a critical construct of the same name before it can enter its own construct.
Critical constructs of different names do not synchronise with each other. E.g.:
#pragma omp parallel num_threads(3)
{
    if (omp_get_thread_num() > 1) {
        #pragma omp critical(foo2)
        foo();
    }
    else {
        #pragma omp critical(foo01)
        foo();
    }
    bar();
    ...
}
results in something like:
thread 0: ------xxxxxxxx[< foo() >]< bar() >---->
thread 1: ---[< foo() >]< bar() >--------------->
thread 2: -------------[< foo() >]< bar() >----->
Now thread 2 does not synchronise with the other threads because its critical construct is named differently and therefore makes a potentially dangerous simultaneous call into foo().
On the other hand, anonymous critical constructs (and in general constructs with the same name) synchronise with one another no matter where in the code they are:
#pragma omp parallel num_threads(3)
{
    #pragma omp critical
    foo();
    ...
    #pragma omp critical
    bar();
    ...
}
and the resulting execution timeline:
thread 0: ------xxxxxxxx[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]------------>
thread 1: ---[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]----------------------->
thread 2: -------------xxxxxxxxxxxx[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]->

How portable is an if clause on a parallel directive?

This is related to How to disable OMP in a translation unit at the source file?. The patch I am working on includes the following due to benchmarking results. It appears we need the ability to turn OMP off for the translation unit:
static const bool CRYPTOPP_RW_USE_OMP = true;
...
ModularArithmetic modp(m_p), modq(m_q);
#pragma omp parallel sections if(CRYPTOPP_RW_USE_OMP)
{
    #pragma omp section
    m_pre_2_9p = modp.Exponentiate(2, (9 * m_p - 11)/8);
    #pragma omp section
    m_pre_2_3q = modq.Exponentiate(2, (3 * m_q - 5)/8);
    #pragma omp section
    m_pre_q_p = modp.Exponentiate(m_q, m_p - 2);
}
The patch also applies to a cross platform library (Linux, Unix, Solaris, BSDs, OS X and Windows), and it supports a lot of older compilers. I need to ensure that I don't break a compile.
Question: how portable is the #pragma omp parallel sections if(CRYPTOPP_RW_USE_OMP)? Will using it break compiles that used to work with just #pragma omp parallel sections?
I tried looking at past OpenMP specifications, like 2.0, but I can't see where it's allowed in the grammar (see Appendix C). The closest I could find is the parallel-directive production (line 22), which leads to parallel-clause (line 24) and then unique-parallel-clause.
And looking at the documentation for platforms I can't test on, it's not clear to me whether it's available. For example, Microsoft's documentation for Visual Studio 2005 appears to only allow it on a loop.
In the very document you link, page 8, section 2.2 (parallel Construct), if is among the available clauses (the first one listed). It is part of the standard, so it is portable across all conforming compilers.
In your MSDN link:
if applies to the following directives:
parallel
for (OpenMP)
sections (OpenMP)
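For illustration, here is a minimal sketch of the if clause on a combined parallel sections directive; the use_omp flag is just a stand-in for CRYPTOPP_RW_USE_OMP:
#include <stdio.h>
#include <omp.h>

int main(void)
{
    int use_omp = 0;   /* set to 1 to allow a team of more than one thread */

    #pragma omp parallel sections if(use_omp)
    {
        #pragma omp section
        printf("section A run by thread %d\n", omp_get_thread_num());
        #pragma omp section
        printf("section B run by thread %d\n", omp_get_thread_num());
    }
    /* With use_omp == 0 the region executes with a team of one thread,
       so both sections report thread 0. */
    return 0;
}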

Does the main thread surely enter an OpenMP parallel section?

I need to continuously check a terminating condition through a callback provided by a closed-source module. The thread then enters a parallel section. I don't know whether this callback is safe to call from threads other than the one that received it, so if I want to use it from within the parallel section, I should have only the main, "originating" thread call it. I can do that, but I need the assumption that the main thread always enters the parallel region, or the callback won't be called. Does that hold?
From the OpenMP 4.5 Complete Specifications
When any thread encounters a parallel construct, the thread creates a
team of itself and zero or more additional threads and becomes the
master of the new team.
So yes, the main thread enters the OpenMP parallel region.
Note that the term section may not be appropriate here, as if you have several #pragma omp section, you won't know which thread will execute which one.
What you describe fits
#pragma omp master
See What is the benefit of '#pragma omp master' as opposed to '#pragma omp single'? for more details
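A sketch of that suggestion: only the thread that encountered the parallel construct (the master of the team) polls the callback, and barriers make the flag visible to the whole team; check_termination() is a hypothetical stand-in for the closed-source callback:
#include <omp.h>

extern int check_termination(void);   /* hypothetical closed-source callback */

void run(void)
{
    int stop = 0;

    #pragma omp parallel shared(stop)
    {
        while (1) {
            #pragma omp master
            if (check_termination())
                stop = 1;

            #pragma omp barrier        /* publish the flag to every thread */
            if (stop)
                break;

            /* ... one round of parallel work ... */

            #pragma omp barrier        /* keep the rounds in lockstep before re-polling */
        }
    }
}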

OpenMP parallel region without implicit synchronisation

My code has following structure
<serial-code-1>
#pragma omp parallel
{
    <parallel-code>
}
<serial-code-2>
I want to remove the implicit barrier synchronization at the end of the parallel region, something like nowait, so that whichever thread finishes first can start doing serial-code-2 (it will require some changes in serial-code-2, but that's possible). How is it possible to achieve something like this?
Perhaps
<serial-code-1>
#pragma omp parallel
{
    <parallel-code>
    #pragma omp single
    {
        <serial-code-2>
    }
}
The code inside the scope of the single directive (the serial code) will be executed by only one thread, probably the first one to finish executing the parallel code.
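A minimal sketch of that pattern, with parallel_work() and serial_code_2() as placeholders: whichever thread reaches the single region first runs serial-code-2 while the others wait at its implicit barrier:
#include <stdio.h>
#include <omp.h>

static void parallel_work(int tid) { printf("parallel work on thread %d\n", tid); }
static void serial_code_2(void)    { printf("serial-code-2 on thread %d\n", omp_get_thread_num()); }

int main(void)
{
    #pragma omp parallel
    {
        parallel_work(omp_get_thread_num());

        #pragma omp single    /* one thread, typically the first to finish above */
        serial_code_2();
    }
    return 0;
}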
