Does the main thread always enter an OpenMP parallel section? - openmp

I need to continuously check a terminating condition through a callback provided by a closed-source module. The thread then enters a parallel section. I don't know whether this callback is safe to call from threads other than the one that received it, so if I want to use it from within the parallel section, I should have only the main, "originating" thread call it. I can do that, but I need the assumption that the main thread always enters the parallel section, or the callback won't be called. Does that hold?

From the OpenMP 4.5 Complete Specifications
When any thread encounters a parallel construct, the thread creates a
team of itself and zero or more additional threads and becomes the
master of the new team.
So yes, the main thread enters the OpenMP parallel region.
Note that the term section may not be appropriate here: if you have several #pragma omp section blocks, you won't know which thread will execute which one.

What you describe fits
#pragma omp master
See What is the benefit of '#pragma omp master' as opposed to '#pragma omp single'? for more details
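As a minimal sketch of that pattern (my own illustration, not from the original answer; check_terminate() and do_chunk_of_work() are hypothetical placeholders for the closed-source callback and the parallel work):

#include <omp.h>

int check_terminate(void);            /* hypothetical prototype for the closed-source callback */
void do_chunk_of_work(void);          /* hypothetical placeholder for the parallel work */

void run(void)
{
    int done = 0;                     /* shared termination flag */
    #pragma omp parallel shared(done)
    {
        int stop = 0;
        while (!stop) {
            #pragma omp master        /* only the main (master) thread calls the callback */
            {
                if (check_terminate()) {
                    #pragma omp atomic write
                    done = 1;
                }
            }

            do_chunk_of_work();       /* every thread does its share of the work */

            #pragma omp atomic read
            stop = done;              /* every thread re-reads the shared flag */
        }
    }
}

Since master has no implicit barrier, the flag is written and read atomically here so that the other threads eventually observe the update.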

Related

difference between omp critical and omp single

I am trying to understand the exact difference between #pragma omp critical and #pragma omp single in OpenMP:
Microsoft definitions for these are:
Single: Lets you specify that a section of code should be executed on
a single thread, not necessarily the master thread.
Critical: Specifies that code is only executed on one thread at a time.
So it means that in both, the exact section of code afterwards would be executed by just one thread and other threads will not enter that section, e.g. if we print something, we will see the result on screen once, right?
How about the difference? It looks like critical takes care of the timing of execution, but single doesn't! But I don't see any difference in practice! Does it mean that some kind of waiting or synchronization for other threads (which do not enter that section) happens in critical, but there is nothing that holds other threads back in single? How can it change the outcome in practice?
I appreciate if anyone can clarify this to me especially by an example. Thanks!
single and critical are two very different things. As you mentioned:
single specifies that a section of code should be executed by a single thread (not necessarily the master thread)
critical specifies that code is executed by one thread at a time
So the former will be executed only once, while the latter will be executed as many times as there are threads.
For example the following code
int a = 0, b = 0;
#pragma omp parallel num_threads(4)
{
    #pragma omp single
    a++;
    #pragma omp critical
    b++;
}
printf("single: %d -- critical: %d\n", a, b);
will print
single: 1 -- critical: 4
I hope you see the difference better now.
For the sake of completeness, I can add that:
master is very similar to single with two differences:
master will be executed by the master thread only, while single can be executed by whichever thread reaches the region first; and
single has an implicit barrier upon completion of the region, where all threads wait for synchronization, while master doesn't have one.
atomic is very similar to critical, but is restricted for a selection of simple operations.
I added these clarifications since these two pairs of directives are often the ones people tend to mix up...
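To put master and atomic next to the earlier example, here is a small sketch of my own (not from the original answer), again with four threads:

int a = 0, b = 0, c = 0;
#pragma omp parallel num_threads(4)
{
    #pragma omp master      /* only the master thread (thread 0) executes this; no barrier afterwards */
    a++;

    #pragma omp single      /* exactly one (unspecified) thread executes this; implicit barrier afterwards */
    b++;

    #pragma omp atomic      /* every thread executes this, one non-interruptible update at a time */
    c++;
}
printf("master: %d -- single: %d -- atomic: %d\n", a, b, c);

which prints master: 1 -- single: 1 -- atomic: 4.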
single and critical belong to two completely different classes of OpenMP constructs. single is a worksharing construct, alongside for and sections. Worksharing constructs are used to distribute a certain amount of work among the threads. Such constructs are "collective" in the sense that in correct OpenMP programs all threads must encounter them while executing and moreover in the same sequential order, also including the barrier constructs. The three worksharing constructs cover three different general cases:
for (a.k.a. loop construct) distributes automatically the iterations of a loop among the threads - in most cases all threads get work to do;
sections distributes a sequence of independent blocks of code among the threads - some threads get work to do. This is a generalisation of the for construct as a loop with 100 iterations could be expressed as e.g. 10 sections of loops with 10 iterations each.
single singles out a block of code for execution by one thread only, often the first one to encounter it (an implementation detail) - only one thread gets work. single is to a great extent equivalent to sections with a single section only.
A common trait of all worksharing constructs is the presence of an implicit barrier at their end, which barrier might be turned off by adding the nowait clause to the corresponding OpenMP construct, but the standard does not require such behaviour and with some OpenMP runtimes the barrier might continue to be there despite the presence of nowait. Incorrectly ordered (i.e. out of sequence in some of the threads) worksharing constructs might therefore lead to deadlocks. A correct OpenMP program will never deadlock when the barriers are present.
critical is a synchronisation construct, alongside master, atomic, and others. Synchronisation constructs are used to prevent race conditions and to bring order in the execution of things.
critical prevents race conditions by preventing the simultaneous execution of code among the threads in the so-called contention group. This means all threads from all parallel regions encountering similarly named critical constructs get serialised;
atomic turns certain simple memory operations into atomic ones, usually by utilising special assembly instructions. Atomics complete at once as a single non-breakable unit. For example, an atomic read from some location by one thread, which happens concurrently with an atomic write to the same location by another thread, will either return the old value or the updated value, but never some kind of an intermediate mash-up of bits from both the old and the new values;
master singles out a block of code for execution by the master thread (thread with ID of 0) only. Unlike single, there is no implicit barrier at the end of the construct and also there is no requirement that all threads must encounter the master construct. Also, the lack of implicit barrier means that master does not flush the shared memory view of the threads (this is an important but very poorly understood part of OpenMP). master is basically a shorthand for if (omp_get_thread_num() == 0) { ... }.
critical is a very versatile construct as it is able to serialise different pieces of code in very different parts of the program, even in different parallel regions (significant only in the case of nested parallelism). Each critical construct has an optional name provided in parentheses immediately after it. Anonymous critical constructs share the same implementation-specific name. Once a thread enters such a construct, any other thread encountering another construct of the same name is put on hold until the original thread exits its construct. Then the serialisation process continues with the rest of the threads.
An illustration of the concepts above follows. The following code:
#pragma omp parallel num_threads(3)
{
    foo();
    bar();
    ...
}
results in something like:
thread 0: -----< foo() >< bar() >-------------->
thread 1: ---< foo() >< bar() >---------------->
thread 2: -------------< foo() >< bar() >------>
(thread 2 is purposely a latecomer)
Having the foo(); call within a single construct:
#pragma omp parallel num_threads(3)
{
    #pragma omp single
    foo();
    bar();
    ...
}
results in something like:
thread 0: ------[-------|]< bar() >----->
thread 1: ---[< foo() >-|]< bar() >----->
thread 2: -------------[|]< bar() >----->
Here [ ... ] denotes the scope of the single construct and | is the implicit barrier at its end. Note how the latecomer thread 2 makes all other threads wait. Thread 1 executes the foo() call as the example OpenMP runtime chooses to assign the job to the first thread to encounter the construct.
Adding a nowait clause might remove the implicit barrier, resulting in something like:
thread 0: ------[]< bar() >----------->
thread 1: ---[< foo() >]< bar() >----->
thread 2: -------------[]< bar() >---->
Having the foo(); call within an anonymous critical construct:
#pragma omp parallel num_threads(3)
{
    #pragma omp critical
    foo();
    bar();
    ...
}
results in something like:
thread 0: ------xxxxxxxx[< foo() >]< bar() >-------------->
thread 1: ---[< foo() >]< bar() >------------------------->
thread 2: -------------xxxxxxxxxxxx[< foo() >]< bar() >--->
The xxxxx... marks the time a thread spends waiting for other threads that are executing a critical construct of the same name before it can enter its own construct.
Critical constructs of different names do not synchronise with each other. E.g.:
#pragma omp parallel num_threads(3)
{
    if (omp_get_thread_num() > 1) {
        #pragma omp critical(foo2)
        foo();
    }
    else {
        #pragma omp critical(foo01)
        foo();
    }
    bar();
    ...
}
results in something like:
thread 0: ------xxxxxxxx[< foo() >]< bar() >---->
thread 1: ---[< foo() >]< bar() >--------------->
thread 2: -------------[< foo() >]< bar() >----->
Now thread 2 does not synchronise with the other threads because its critical construct is named differently and therefore makes a potentially dangerous simultaneous call into foo().
On the other hand, anonymous critical constructs (and in general constructs with the same name) synchronise with one another no matter where in the code they are:
#pragma omp parallel num_threads(3)
{
    #pragma omp critical
    foo();
    ...
    #pragma omp critical
    bar();
    ...
}
and the resulting execution timeline:
thread 0: ------xxxxxxxx[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]------------>
thread 1: ---[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]----------------------->
thread 2: -------------xxxxxxxxxxxx[< foo() >]< ... >xxxxxxxxxxxxxxx[< bar() >]->

OpenMP parallel region without implicit synchronisation

My code has following structure
<serial-code-1>
#pragma omp parallel
{
<parallel-code>
}
<serial-code-2>
I want to remove the implicit barrier synchronisation at the end of the parallel region, something like nowait, so that any thread that finishes first can start doing serial-code-2 (it will require some changes in serial-code-2, but that's possible). How can I achieve something like this?
Perhaps
<serial-code-1>
#pragma omp parallel
{
    <parallel-code>

    #pragma omp single
    {
        <serial-code-2>
    }
}
The serial code inside the scope of the single directive will be executed by only one thread, probably the first one to finish executing the parallel code.
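A slightly more concrete sketch of that idea (mine, not from the original answer; n, process() and finalize() are hypothetical stand-ins for the loop bound, the parallel work, and serial-code-2): the nowait on the loop is what actually lets the first thread that finishes move on to the serial tail instead of waiting at the loop's implicit barrier.

#pragma omp parallel
{
    #pragma omp for nowait          /* no barrier: each thread leaves the loop as soon as its share is done */
    for (int i = 0; i < n; ++i)
        process(i);

    #pragma omp single              /* one thread, typically an early finisher, runs the serial tail */
    finalize();
}                                   /* the implicit barrier at the end of the parallel region still applies */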

explicit flush directive with OpenMP: when is it necessary and when is it helpful

One OpenMP directive I have never used and don't know when to use is flush(with and without a list).
I have two questions:
1.) When is an explicit `omp flush` or `omp flush(var1, ...)` necessary?
2.) Is it sometimes not necessary but helpful (i.e. can it make the code faster)?
The main reason I can't tell when to use an explicit flush is that flushes are done implicitly after many directives (e.g. barrier, single, ...) which synchronize the threads. I can't, for example, see why using flush without synchronizing (e.g. with nowait) would be helpful.
I understand that different compilers may implement omp flush in different ways. Some may interpret a flush with a list as one without (i.e. flush all shared objects); see OpenMP flush vs flush(list). But I only care about what the specification requires. In other words, I want to know where an explicit flush may in principle be necessary or helpful.
Edit: I think I need to clarify my second question. Let me give an example. I would like to know if there are cases where removing an implicit flush (e.g. with nowait) and using an explicit flush only on certain shared variables instead would be faster (and still give the correct result). Something like the following:
float a, b;
#pragma omp parallel
{
    #pragma omp for nowait  // No barrier. Do not flush on exit.
    // code which uses only shared variable a

    #pragma omp flush(a)    // Flush only variable a rather than all shared variables.

    #pragma omp for
    // Code which uses both shared variables a and b.
}
I think that code still needs a barrier after the first for loop, but all barriers have an implicit flush, so that defeats the purpose. Is it possible to have a barrier which does not do a flush?
The flush directive tells the OpenMP compiler to generate code to make the thread's private view on the shared memory consistent again. OpenMP usually handles this pretty well and does the right thing for typical programs. Hence, there's no need for flush.
However, there are cases where the OpenMP compiler needs some help. One of these cases is when you try to implement your own spin lock. In these cases, you would need a combination of flushes to make things work, since otherwise the spin variables will not be updated. Getting the sequence of flushes correct will be tough and will be very, very error prone.
The general recommendation is that flushes should not be used. If flushes are used at all, programmers should by all means avoid flush with a list (flush(var,...)). Some folks are actually talking about deprecating it in a future version of OpenMP.
Performance-wise the impact of flush should be more negative than positive. Since it causes the compiler to generate memory fences and additional load/store operations, I would expect it to slow down things.
EDIT: For your second question, the answer is no. OpenMP makes sure that each thread has a consistent view on the shared memory when it needs to. If threads do not synchronize, they do not need to update their view on the shared memory, because they do not see any "interesting" change there. That means that any read a thread makes does not read any data that has been changed by some other thread. If that were the case, then you'd have a race condition and a potential bug in your program. To avoid the race, you need to synchronize (which then implies a flush to make each participating thread's view consistent again). A similar argument applies to barriers. You use barriers to start a new epoch in the computation of a parallel region. Since you're keeping the threads in lock-step, you will very likely also have some shared state between the threads that has been computed in the previous epoch.
BTW, OpenMP may keep private data for a thread, but it does not have to. So, it is likely that the OpenMP compiler will keep variables in registers for a while, which causes them to be out of sync with the shared memory. However, updates to array elements are typically reflected pretty soon in the shared memory, since the amount of private storage for a thread is usually small (register sets, caches, scratch memory, etc.). OpenMP only gives you some weak restrictions on what you can expect. An actual OpenMP implementation (or the hardware) may be as strict as it wishes to be (e.g., write back any change immediately and do flushes all the time).
Not exactly an answer, but Michael Klemm's question is closed for comments. I think an excellent example of why flushes are so hard to understand and use properly is the following one copied (and shortened a bit) from the OpenMP Examples:
//http://www.openmp.org/wp-content/uploads/openmp-examples-4.0.2.pdf
//Example mem_model.2c, from Chapter 2 (The OpenMP Memory Model)
#include <stdio.h>
#include <omp.h>

int main() {
    int data, flag = 0;
    #pragma omp parallel num_threads(2)
    {
        if (omp_get_thread_num() == 0) {
            /* Write to the data buffer that will be read by thread 1 */
            data = 42;
            /* Flush data to thread 1 and strictly order the write to data
               relative to the write to the flag */
            #pragma omp flush(flag, data)
            /* Set flag to release thread 1 */
            flag = 1;
            /* Flush flag to ensure that thread 1 sees the change */
            #pragma omp flush(flag)
        }
        else if (omp_get_thread_num() == 1) {
            /* Loop until we see the update to the flag */
            #pragma omp flush(flag, data)
            while (flag < 1) {
                #pragma omp flush(flag, data)
            }
            /* Values of flag and data are undefined */
            printf("flag=%d data=%d\n", flag, data);
            #pragma omp flush(flag, data)
            /* Value of data will be 42, value of flag still undefined */
            printf("flag=%d data=%d\n", flag, data);
        }
    }
    return 0;
}
Read the comments and try to understand.
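For contrast, here is a sketch of my own (not part of the answers above) showing how the same handshake can be written without explicit flush directives, assuming an OpenMP 5.0 or newer compiler: the release/acquire pair on the flag orders the access to data.

#include <stdio.h>
#include <omp.h>

int main() {
    int data = 0, flag = 0;
    #pragma omp parallel num_threads(2)
    {
        if (omp_get_thread_num() == 0) {
            data = 42;
            #pragma omp atomic write release    /* publishes the write to data together with flag */
            flag = 1;
        }
        else if (omp_get_thread_num() == 1) {
            int f = 0;
            while (f < 1) {
                #pragma omp atomic read acquire /* pairs with the release write above */
                f = flag;
            }
            printf("data=%d\n", data);          /* with the acquire/release pairing, data is 42 */
        }
    }
    return 0;
}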

in OpenMP, how can I make every single core run a single thread?

I started using OpenMP 3 days ago. I want to know how to use #pragma to make every single core run a single thread. In more detail:
int ncores = omp_get_num_procs();
for(i = 0; i < ncores;i++){
....
}
I want this for loop to be distributed across the cores I have, so which #pragma should I use?
Another thing: what do those #pragmas mean?
#pragma omp parallel
#pragma omp for
#pragma omp parallel for
I got a little confused by those #pragmas.
Thank you a lot. :)
Thread Pinning
I want to know how to use #pragma to make every single core runs a
single thread.
Which OpenMP implementation do you use? The answer depends on that.
Pinning is not defined with pragmas. You will have to use environment variables. When using gcc, one can use an environment variable to pin threads to cores:
GOMP_CPU_AFFINITY="0-3" ./main
binds the first thread to the first core, the second thread to the second, and so on. See the gomp documentation for more information (section 3, Environment Variables). I forgot how to do the same thing with PGI and other compilers, but you should be able to find the answer for those compilers using a popular search engine.
OpenMP Pragmas
There's no way to avoid reading documentation. See this link to an IBM website for example. I found the tutorial by Blaise Barney quite useful.
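Very briefly, and only as a sketch rather than a substitute for the documentation: parallel creates a team of threads, for distributes the iterations of the following loop among the team, and parallel for simply combines the two. Applied to the loop from the question:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int ncores = omp_get_num_procs();

    /* "parallel for": spawn a team and split the iterations among its threads */
    #pragma omp parallel for
    for (int i = 0; i < ncores; ++i) {
        printf("iteration %d run by thread %d\n", i, omp_get_thread_num());
    }
    return 0;
}

Note that this distributes the iterations over the threads of the team; whether each thread ends up on its own core is the pinning question addressed above.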
To add to the previous answer, the equivalent environment variable in the Intel OpenMP library is KMP_AFFINITY. A similar usage to
GOMP_CPU_AFFINITY="0-3"
would be
KMP_AFFINITY="proclist=[0-3]"
Full details for the KMP_AFFINITY syntax and options are here:
http://software.intel.com/sites/products/documentation/studio/composer/en-us/2009/compiler_c/optaps/common/optaps_openmp_thread_affinity.htm
Newer OpenMP versions (3.1 and later) make your life much easier. You can simply add the following line to your .bashrc or .bash_profile.
export OMP_PROC_BIND=true

OpenMP C and C++ cout/printf does not give the same output

I am a complete noob in OpenMP and just started by exploring some simple test script below.
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 10; ++i)
        std::cout << i << " " << endl;
        // printf("%d \n", i);
}
I tried the C and C++ version and the C version seems to work fine whereas the C++ version gives me a wrong output.
Many implementations of printf acquire a lock to ensure that each printf call is not interrupted by other threads.
In contrast, std::cout's overloaded << operator means that (even with a lock) one thread's printing of i and ' ' and '\n' can be interleaved with another thread's output, because std::cout<<i<<" "<<endl; is translated to three operator<<() function calls by the C++ compiler.
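One common workaround (a sketch of mine, not part of the original answer) is to build each line in a thread-private buffer and hand it to std::cout as a single insertion; wrapping that insertion in a named critical gives a hard guarantee against interleaving:

#include <iostream>
#include <sstream>

int main() {
    #pragma omp parallel for
    for (int i = 0; i < 10; ++i) {
        std::ostringstream line;        // thread-private buffer for one line of output
        line << i << " " << "\n";
        #pragma omp critical(print)     // serialise the single insertion below
        std::cout << line.str();
    }
    return 0;
}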
This is outdated, but perhaps it could still be of help to someone:
It's not really clear what you expect the output to be, but be aware of the following:
Your variable "i" is possibly shared amongst threads. You then have a race condition on the contents of "i": one thread needs to wait for another when it wants to access "i", and one thread can change "i" without another thread taking note of it, meaning it will output a wrong value.
The endl flushes the stream's buffer after ending the line. If you use \n for the newline the effect is similar but without the flush. And std::cout is a shared object too, so multiple threads race for access to it. When the buffer isn't flushed after every access you may experience interference.
To make sure those are not related to your problems, you could declare "i" as private so every thread counts "i" itself, and you could play with flushing the output to see whether it has to do with the problem you experience.

Resources