OpenMP: what is the difference between "taskloop" and "omp for" performance wise? - parallel-processing

"taskloop" was introduced in OpenMP 4.5. It can take clauses from both the loop and task constructs (except the depend clause, AFAIK).
However, I'm wondering whether the "taskloop" and "omp for" constructs also differ performance-wise.

I think it may depend on the actual problem. To parallelize a for loop, omp for can be faster than tasks because it offers several different scheduling schemes for your needs. In my experience (solving a particular problem using the Clang 12 compiler), omp for produced slightly faster code than tasks (on a Ryzen 5 7800X).
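The difference is easiest to see side by side. A minimal sketch (array contents and grainsize are illustrative): the first function uses a worksharing loop, the second the taskloop construct, which needs an enclosing parallel/single region so that one thread creates the tasks for the whole team.

```cpp
#include <vector>

// "omp for": iterations are divided among the team's threads according
// to the schedule clause (static by default).
std::vector<double> scale_for(const std::vector<double>& b) {
    std::vector<double> a(b.size());
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < (long)b.size(); ++i)
        a[i] = 2.0 * b[i];
    return a;
}

// "taskloop": the single encountering thread chops the iteration space
// into tasks; any thread in the team may execute them. grainsize controls
// how many iterations each task receives.
std::vector<double> scale_taskloop(const std::vector<double>& b) {
    std::vector<double> a(b.size());
    #pragma omp parallel
    #pragma omp single
    #pragma omp taskloop grainsize(64)
    for (long i = 0; i < (long)b.size(); ++i)
        a[i] = 2.0 * b[i];
    return a;
}
```

For a regular loop like this, the task-creation and task-scheduling overhead of taskloop is the usual reason omp for comes out slightly ahead; taskloop pays off when the loop sits inside otherwise irregular task-based code.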

Related

Is batching same functions with SIMD instruction possible?

I have a scenario in which many instances of exactly the same function (for simplicity, let's consider only C/C++ and Python here) will be executed at the same time on my machine. Intuitively, I would just use multi-threading, treating each instance of the function as a thread, to exploit the parallelism; the instances do not contend for the same resources, but they do perform many branch operations (e.g., for loops). However, since they are actually the same function, I'm thinking about batching them using SIMD instructions, e.g., AVX-512. Of course, it should be automatic, so that users do not have to modify their code.
The reason? Every thread/process/container/VM occupies resources, but AVX needs only one instruction, so I could serve more users with the same hardware.
Most articles I find online focus on using AVX instructions inside a function, for example to accelerate stream data processing or to handle some large calculation. None of them mentions batching different instances of the same function.
I know there are challenges, such as different execution paths caused by different inputs, and that it is not easy to turn a normal function into a batched version automatically, but I think it is technically possible.
Here are my questions:
Is it hard (or even possible) to automatically change a normal function into a batched version?
If it is possible, what restrictions should I put on the function to make it so? For example, what if the function has only one execution path regardless of the input data?
Are there other technologies that solve this problem better? I don't think a GPU is a good option for me because GPUs don't handle I/O or branch-heavy code well, although the SIMT model fits my goal perfectly.
Thanks!
SSE/AVX is basically a vector unit: it allows simple operations (like +, -, *, /, AND, OR, XOR, etc.) on arrays of multiple elements at once. AVX1 and AVX2 have 256-bit registers, so you can process e.g. 8 32-bit singles at once, or 4 doubles. AVX-512 is coming but quite rare at the moment.
So if your functions are all operations on arrays of basic types, it is a natural fit. Rewriting your function using AVX intrinsics is doable if the operations are very simple. Complex things (like mismatched vector widths), or doing it in assembler, are a challenge though.
If your function is not operating on vectors, it becomes difficult, and the possibilities are mostly theoretical. Auto-vectorizing compilers can sometimes do this, but it's rare, limited, and extremely complex.
There are two ways to fix this: vectorization (SIMD) and parallelization (threads).
GCC can already do the SIMD vectorization you want provided that the function is inlined, and the types and operations are compatible (and it will automatically inline smallish functions without you asking it to).
E.g.
inline void func (int i)
{
  somearray[i] = someotherarray[i] * athirdarray[i];
}

for (int i = 0; i < ABIGNUMBER; i++)
  func (i);
Vectorization and inlining are enabled at -O3.
If the functions are too complex, and/or GCC doesn't vectorize it yet, then you can use OpenMP or OpenACC to parallelize it.
OpenMP uses special markup to tell the compiler where to spawn threads.
E.g.
#pragma omp parallel
#pragma omp for
for (int i = 0; i < ABIGNUMBER; i++)
....
And yes, you can do that on a GPU too! You do have to do a bit more typing to get the data copied in and out correctly. Only the marked up areas run on the GPU. Everything else runs on the CPU, so I/O etc. is not a problem.
#pragma omp target map(somearray,someotherarray,athirdarray)
#pragma omp parallel
#pragma omp for
for (int i = 0; i < ABIGNUMBER; i++)
....
OpenACC is a similar idea, but more specialized towards GPUs.
You can find OpenMP and OpenACC compilers in many places. Both GCC and LLVM support NVidia GPUs. LLVM has some support for AMD GPUs, and there are unofficial GCC builds available too (with official support coming soon).

Alternative for dynamic parallelism for CUDA

I am very new to the CUDA programming model and programming in general, I suppose. I'm attempting to parallelize an expectation maximization algorithm. I am working on a gtx 480 which has compute capability 2.0. At first, I sort of assumed that there's no reason for the device to launch its own threads, but of course, I was sadly mistaken. I came across this pdf.
http://docs.nvidia.com/cuda/pdf/CUDA_Dynamic_Parallelism_Programming_Guide.pdf
Unfortunately, dynamic parallelism only works on the latest and greatest GPUs, with compute capability 3.5. Without diving into too much specifics, what is the alternative to dynamic parallelism? The loops in the CPU EM algorithm have many dependencies and are highly nested, which seems to make dynamic parallelism an attractive ability. I'm not sure if my question makes sense so please ask if you need clarification.
Thank you!
As indicated by @JackOLantern, dynamic parallelism can be described in a nutshell as the ability to call a kernel (i.e., a __global__ function) from device code (a __global__ or __device__ function).
Since the kernel call is the principal method by which the machine spins up multiple threads in response to a single function call, there is really no direct alternative that provides all the capability of dynamic parallelism on a device that does not support it (i.e., pre-cc 3.5 devices).
Without dynamic parallelism, your overall code will almost certainly involve more synchronization and communication between CPU code and GPU code.
The principal method would be to identify some unit of your code as parallelizable, convert it to a kernel, and work through your code in an essentially non-nested fashion. Repetitive functions might be handled by looping in the kernel, or else by looping in the host code that calls the kernel.
For a pictorial example of what I am trying to describe, please refer to slide 14 of this deck which introduces some of the new features of CUDA 5 including dynamic parallelism. The code architecture on the right is an algorithm realized with dynamic parallelism. The architecture on the left is the same function realized without dynamic parallelism.
I have checked your algorithm on Wikipedia, and I'm not sure you need dynamic parallelism at all.
You do the expectation step in your kernel, __syncthreads(), do the maximization step, and __syncthreads() again. From this distance, the expectation looks like a reduction primitive, and the maximization is a filter one.
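As a structural sketch (plain C++, hypothetical names, not a real EM implementation): the expectation step maps to a reduction over the data and the maximization step to a cheap parameter update; in CUDA these would be two kernels (or two phases separated by __syncthreads() within a block), with a host-side loop providing the grid-wide synchronization that dynamic parallelism would otherwise give you.

```cpp
#include <vector>

// "E-step": a reduction kernel in CUDA terms. Here it just accumulates
// residuals against the current parameter estimate.
double e_step(const std::vector<double>& x, double mu) {
    double acc = 0.0;
    for (double v : x) acc += v - mu;
    return acc;
}

// "M-step": a cheap per-parameter update (a map/filter-style kernel).
double m_step(double mu, double acc, std::size_t n) {
    return mu + acc / n;
}

// Host-side loop: each trip through the loop is an implicit grid-wide
// synchronization point between the two "kernels".
double run_em(const std::vector<double>& x, double mu, int iters) {
    for (int it = 0; it < iters; ++it) {
        double acc = e_step(x, mu);
        mu = m_step(mu, acc, x.size());
    }
    return mu;
}
```

In this toy form the estimate converges to the sample mean after one iteration; the point is the shape of the code, not the statistics.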
If that doesn't work and you need real task parallelism, a GPU may not be the best choice. While the Kepler GPUs can do it to some degree, that is not what the architecture is designed for. In that case you might be better off using a multi-CPU system, such as an office grid, a supercomputer, or a Xeon Phi accelerator. You should also look at OpenMP and MPI; these are the languages used for task-parallel programming (actually, OpenMP is mostly just a handful of pragmas).

Suggest an OpenMP program that has noticeable speedup and the most important concepts in it for a talk

I am going to give a lecture on OpenMP, and I want to write a program using OpenMP live. What program do you suggest that demonstrates the most important concepts of OpenMP and has noticeable speedup? I want an awesome example program; please help me, all of you who are experts in OpenMP.
You know, I am looking for a technical and interesting example with nice output.
I want to write two programs live: the first to best illustrate the most important OpenMP concepts with an impressive speedup, and the second as a hands-on exercise that everyone writes along with me at the same time.
My audience may be quite inexperienced.
Personally I wouldn't say that the most impressive aspect of OpenMP is the scalability of the codes you can write with it. I'd say that a more impressive aspect is the ease with which one can take an existing serial program and, with only a few OpenMP directives, turn it into a parallel program with satisfactory scalability.
So I'd suggest that you take any program (or part of any program) of interest to your audience, better yet a program your audience is familiar with, and parallelise it right there and then in your lecture, lively as you put it. I'd be impressed if a lecturer could show me, say, a 4 times speedup on 8 cores with 5 minutes coding and a re-compilation. And that leads on to all sorts of interesting topics about why you don't (always, easily) get 8 times speedup on 8 cores.
Of course, like all stage illusionists, you'll have to choose your example carefully and rehearse to ensure that you do get an impressive-enough speedup to support your argument.
Personally I'd be embarrassed to use an embarrassingly parallel program for such a demo; the more perceptive members of the audience might be provoked into a response such as meh.
(1) Matrix multiply
Perhaps it's the simplest example (though matrix addition would be simpler still).
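A minimal version for such a demo might look like the following (naive triple loop, square matrices in row-major storage); the single pragma is the whole parallelization, which is exactly the point being made to the audience.

```cpp
#include <vector>

// Naive n x n matrix multiply, C = A * B, row-major storage.
// collapse(2) distributes the full i*j iteration space across threads.
std::vector<double> matmul(const std::vector<double>& A,
                           const std::vector<double>& B, int n) {
    std::vector<double> C(n * n, 0.0);
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            double s = 0.0;
            for (int k = 0; k < n; ++k)
                s += A[i * n + k] * B[k * n + j];
            C[i * n + j] = s;
        }
    return C;
}
```

Compiled with -fopenmp the loop runs in parallel; without the flag the pragma is ignored and the code is plain serial C++, which also makes a nice before/after for a live demo.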
(2) Mandelbrot
http://en.wikipedia.org/wiki/Mandelbrot_set
Mandelbrot is also embarrassingly parallel, and OpenMP can achieve decent speedups. You can even use graphics to visualize it. Mandelbrot is also an interesting example because it has workload imbalance. You may see different speedups based on scheduling policies (e.g., schedule(dynamic,1) vs. schedule(static)), and different threading libraries (e.g., Cilk Plus or TBB).
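A sketch of the escape-time kernel and one row of the image (the coordinate mapping and iteration cap are illustrative): the dynamic schedule matters because pixels near the set cost far more iterations than pixels outside it.

```cpp
#include <complex>
#include <vector>

// Escape-time count for one point c, capped at max_iter.
// Points inside the Mandelbrot set never escape and return max_iter.
int mandel_iters(std::complex<double> c, int max_iter) {
    std::complex<double> z = 0;
    int i = 0;
    while (i < max_iter && std::norm(z) <= 4.0) {  // norm = |z|^2
        z = z * z + c;
        ++i;
    }
    return i;
}

// One row of pixels; schedule(dynamic, 1) hands iterations out one at a
// time to idle threads, smoothing out the workload imbalance.
std::vector<int> mandel_row(double y, int width, int max_iter) {
    std::vector<int> row(width);
    #pragma omp parallel for schedule(dynamic, 1)
    for (int px = 0; px < width; ++px) {
        double x = -2.0 + 3.0 * px / width;  // map pixel to real in [-2, 1]
        row[px] = mandel_iters({x, y}, max_iter);
    }
    return row;
}
```

Swapping schedule(dynamic, 1) for schedule(static) and re-timing the run is a compact way to show the audience why scheduling policy matters.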
(3) A couple of mathematical kernels
For example, an FFT (in its non-recursive version) is also embarrassingly parallel.
Take a look at "OmpSCR" benchmarks: http://sourceforge.net/projects/ompscr/ This suite has simple OpenMP examples.

Performance optimization in CUDA - Which of these algorithms should I use?

I have an algorithm which consists of two major tasks. Both tasks are embarrassingly parallel, so I can port this algorithm to CUDA in one of the following ways.
Kernel1<<<Block, Threads>>>();   // for task 1
cudaThreadSynchronize();
Kernel2<<<Block, Threads>>>();   // for task 2
Or I can do the following:
Kernel<<<Block, Threads>>>()
{
    1. Threads work on task 1.
    2. Synchronize across the device.
    3. Start on task 2.
}
Note that in the first method we have to come back to the CPU, while in the second we have to synchronize across all blocks in CUDA. A paper at IPDPS '10 says that the second method, with proper care, can perform better. But in general, which method should be followed?
There is currently no officially supported method for synchronizing across thread blocks within a single kernel execution in the CUDA programming model. Methods of doing so, in my experience, produce brittle code that can behave incorrectly under changing circumstances such as running on different hardware or changing driver and CUDA release versions.
Just because something is published in an academic publication does not mean it is a safe idea for production code.
I recommend you stick with your method 1, and I ask you this: have you determined that separating your computation into two separate kernels is really causing a performance problem? Is the cost of a second kernel launch definitely the bottleneck?

Parallel STL algorithms in OS X

I am working on converting an existing program to take advantage of some parallel functionality of the STL.
Specifically, I've rewritten a big loop to work with std::accumulate. It runs nicely.
Now, I want to have that accumulate operation run in parallel.
The documentation I've seen for GCC outlines two specific steps:
Add the compiler flag -D_GLIBCXX_PARALLEL
Possibly include the header <parallel/algorithm>
Adding the compiler flag doesn't seem to change anything. The execution time is the same, and I don't see any indication of multiple core usage when monitoring the system.
I get an error when adding the parallel/algorithm header. I thought it would be included with the latest version of gcc (4.7).
So, a few questions:
Is there some way to definitively determine if code is actually running in parallel?
Is there a "best practices" way of doing this on OS X? (Ideal compiler flags, header, etc?)
Any and all suggestions are welcome.
Thanks!
See http://threadingbuildingblocks.org/
If you only ever parallelize STL algorithms, you are going to be disappointed with the results in general. Those algorithms generally only begin to show a scalability advantage when working over very large datasets (e.g., N > 10 million).
TBB (and others like it) work at a higher level, focusing on the overall algorithm design, not just the leaf functions (like std::accumulate()).
A second alternative is to use OpenMP, which is supported by both GCC and Clang; it is not STL by any means, but it is cross-platform.
A third alternative is to use Grand Central Dispatch, the official multicore API on OS X; again, hardly STL.
A fourth alternative is to wait for C++17, which will have a Parallelism module.
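As a taste of what that module offers: C++17 adds std::reduce, whose contract (an associative, commutative operation) is what permits a parallel implementation. Passing std::execution::par from <execution> as the first argument requests the parallel version (with libstdc++ this additionally requires linking against TBB); the sketch below stays serial so it builds anywhere with C++17.

```cpp
#include <numeric>
#include <vector>

// Unlike std::accumulate, std::reduce may reorder and regroup the
// operations, which is the freedom a parallel implementation needs.
// Parallel form (not used here): std::reduce(std::execution::par,
// v.begin(), v.end(), 0.0) with #include <execution>.
double parallel_friendly_sum(const std::vector<double>& v) {
    return std::reduce(v.begin(), v.end(), 0.0);
}
```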
