OpenMP reduction on SSE2 vector - openmp

I want to compute the average of an image (3 channels of interest + 1 alpha channel we ignore here) for each channel using SSE2 intrinsics. I tried this:
__m128 average = _mm_setzero_ps();
#pragma omp parallel for reduction(+:average)
for(size_t k = 0; k < roi_out->height * roi_out->width * ch; k += ch)
{
    float *in = ((float *)temp) + k;
    average += _mm_load_ps(in);
}
But I get this error with GCC: user-defined reduction not found for average.
Is that possible with SSE2? What's wrong?
Edit
This works:
float sum[4] = { 0.0f };
#pragma omp parallel for simd reduction(+:sum[:4])
for(size_t k = 0; k < roi_out->height * roi_out->width * ch; k += ch)
{
    float *in = ((float *)temp) + k;
    for (int i = 0; i < ch; ++i) sum[i] += in[i];
}
const __m128 average = _mm_load_ps(sum) / ((float)roi_out->height * roi_out->width);

You can declare a user-defined reduction like this:
#pragma omp declare reduction \
    (addps:__m128:omp_out+=omp_in) \
    initializer(omp_priv=_mm_setzero_ps())
And then use it like this:
#pragma omp parallel for reduction(addps:average)
for(size_t k = 0; k < size * ch; k += ch)
{
    average += _mm_loadu_ps(data+k);
}
Most importantly, I think, OpenMP needs to know how to get a neutral element for your reduction (here _mm_setzero_ps(), supplied by the initializer clause).
Full working example: https://godbolt.org/z/Fpqttc
Interesting link: http://pages.tacc.utexas.edu/~eijkhout/pcse/html/omp-reduction.html#User-definedreductions
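For reference, here is a self-contained sketch built around such a declared reduction. The array data, its size N, the channel count CH and the final per-channel averaging are illustrative, not from the original; the combiner uses _mm_add_ps instead of the GCC vector-extension += so it also compiles where operators on __m128 are unavailable.
#include <stdio.h>
#include <xmmintrin.h>

/* user-defined reduction on an SSE vector: combiner + neutral element */
#pragma omp declare reduction \
    (addps : __m128 : omp_out = _mm_add_ps(omp_out, omp_in)) \
    initializer(omp_priv = _mm_setzero_ps())

int main(void)   /* compile with e.g.: gcc -O2 -fopenmp example.c */
{
    enum { N = 1024, CH = 4 };          /* N pixels, CH floats per pixel */
    static float data[N * CH];
    for (int k = 0; k < N * CH; ++k)
        data[k] = (float)(k % CH);      /* channel c holds the value c */

    __m128 sum = _mm_setzero_ps();
    #pragma omp parallel for reduction(addps : sum)
    for (int k = 0; k < N * CH; k += CH)
        sum = _mm_add_ps(sum, _mm_loadu_ps(data + k));

    float avg[4];
    _mm_storeu_ps(avg, _mm_div_ps(sum, _mm_set1_ps((float)N)));
    printf("average per channel: %f %f %f %f\n", avg[0], avg[1], avg[2], avg[3]);
    return 0;
}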

Related

Illegal context for vector clause in simple OpenACC kernel

I'm trying to compile a simple OpenACC benchmark:
void foo(const float * restrict a, int a_stride, float * restrict c, int c_stride) {
    #pragma acc parallel copyin(a[0:a_stride*256]) copyout(c[0:c_stride*256])
    #pragma acc loop vector(128)
    {
        for (int i = 0; i < 256; ++i) {
            float sum = 0;
            for (int j = 0; j < 256; ++j) {
                sum += *(a + a_stride * i + j);
            }
            *(c + c_stride * i) = sum;
        }
    }
}
with NVIDIA HPC SDK 21.5 and run into an error:
$ nvc++ -S tmp.cc -Wall -Wextra -O2 -acc -acclibs -Minfo=all -g -gpu=cc80
NVC++-S-0155-Illegal context for gang(num:) or worker(num:) or vector(length:) (tmp.cc: 7)
NVC++/x86-64 Linux 21.5-0: compilation completed with severe errors
Any idea what may cause this? From what I can tell my syntax for vector(128) is legal.
It's illegal OpenACC syntax to use "vector(value)" with a parallel construct. You need to use a "vector_length" clause on the parallel directive to define the vector length. The reason is that "parallel" defines a single compute region to be offloaded, and hence all vector loops in this region need to have the same vector length.
You can use "vector(value)" only with a "kernels" construct, since the compiler can then split the region into multiple kernels, each having a different vector length.
Option 1:
% cat test.c
void foo(const float * restrict a, int a_stride, float * restrict c, int c_stride) {
    #pragma acc parallel vector_length(128) copyin(a[0:a_stride*256]) copyout(c[0:c_stride*256])
    #pragma acc loop vector
    {
        for (int i = 0; i < 256; ++i) {
            float sum = 0;
            for (int j = 0; j < 256; ++j) {
                sum += *(a + a_stride * i + j);
            }
            *(c + c_stride * i) = sum;
        }
    }
}
% nvc -acc -c test.c -Minfo=accel
foo:
4, Generating copyout(c[:c_stride*256]) [if not already present]
Generating copyin(a[:a_stride*256]) [if not already present]
Generating Tesla code
5, #pragma acc loop vector(128) /* threadIdx.x */
7, #pragma acc loop seq
5, Loop is parallelizable
7, Loop is parallelizable
Option 2:
% cat test.c
void foo(const float * restrict a, int a_stride, float * restrict c, int c_stride) {
    #pragma acc kernels copyin(a[0:a_stride*256]) copyout(c[0:c_stride*256])
    #pragma acc loop independent vector(128)
    {
        for (int i = 0; i < 256; ++i) {
            float sum = 0;
            for (int j = 0; j < 256; ++j) {
                sum += *(a + a_stride * i + j);
            }
            *(c + c_stride * i) = sum;
        }
    }
}
% nvc -acc -c test.c -Minfo=accel
foo:
4, Generating copyout(c[:c_stride*256]) [if not already present]
Generating copyin(a[:a_stride*256]) [if not already present]
5, Loop is parallelizable
Generating Tesla code
5, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */
7, #pragma acc loop seq
7, Loop is parallelizable

Using OpenMP "for simd" in matrix-vector multiplication?

I'm currently trying to get my matrix-vector multiplication function to compare favorably with BLAS by combining #pragma omp for with #pragma omp simd, but I'm not getting any more speedup than if I just use the for construct alone. How do I properly vectorize the inner loop with OpenMP's SIMD construct?
vector dot(const matrix& A, const vector& x)
{
    assert(A.shape(1) == x.size());
    vector y = xt::zeros<double>({A.shape(0)});
    int i, j;
    #pragma omp parallel shared(A, x, y) private(i, j)
    {
        #pragma omp for // schedule(static)
        for (i = 0; i < y.size(); i++) { // row major
            #pragma omp simd
            for (j = 0; j < x.size(); j++) {
                y(i) += A(i, j) * x(j);
            }
        }
    }
    return y;
}
Your directive is incorrect because it would introduce a race condition (on y(i)). You should use a reduction in this case. Here is an example:
vector dot(const matrix& A, const vector& x)
{
    assert(A.shape(1) == x.size());
    vector y = xt::zeros<double>({A.shape(0)});
    int i, j;
    #pragma omp parallel shared(A, x, y) private(i, j)
    {
        #pragma omp for // schedule(static)
        for (i = 0; i < y.size(); i++) { // row major
            double sum = 0; // element type of y (decltype(y(0)) would be a reference type)
            #pragma omp simd reduction(+:sum)
            for (j = 0; j < x.size(); j++) {
                sum += A(i, j) * x(j);
            }
            y(i) += sum;
        }
    }
    return y;
}
Note that it may not necessarily be faster, because some compilers are able to vectorize this code automatically (ICC, for example). GCC and Clang often fail to perform (advanced) SIMD reductions automatically, and such a directive helps them a bit. You can check the assembly code to see how the code was vectorized, or enable vectorization reports (for example with -fopt-info-vec in GCC).
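For example, a GCC invocation that reports which loops were actually vectorized might look like this (the file name dot.cpp is a placeholder):
$ g++ -O3 -march=native -fopenmp -fopt-info-vec-optimized -c dot.cpp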

OpenMP actual number of threads

The code below is based on Tim Mattson's OpenMP video tutorials on YouTube.
I would like to find out the number of threads I actually receive when entering a parallel region (it is possible that I have requested 256 threads but only ended up with 8).
The usual omp_get_num_threads() does not work with the code below (if I try to attach a code block to the directive, I get the error expected a for loop following OpenMP 'directive' directive):
void pi_with_omp() {
    int i;
    double x, pi, sum = 0.0;
    double start_time, run_time;
    step = 1.0 / (double)num_steps;
    omp_set_num_threads(NUM_THREADS);
    start_time = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum) private(x)
    for (i = 0; i < num_steps; i++) {
        x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }
    pi = step * sum;
    run_time = omp_get_wtime() - start_time;
    printf("\n pi with %ld steps is %lf in %lf seconds", num_steps, pi, run_time);
}
The only way I have found is to rewrite the above and split the pragma into two, like the following:
int nthreads;
#pragma omp parallel
{
    double x;
    int id, nthrds;
    id = omp_get_thread_num();
    nthrds = omp_get_num_threads();
    if (id == 0) nthreads = nthrds;
    #pragma omp for reduction(+:sum)
    for (i = 0; i < num_steps; i++) {
        x = (i + 0.5) * step;
        sum = sum + 4.0 / (1.0 + x * x);
    }
}
Which does the job but is not pretty. Has anyone got a better solution?
You can simplify your code, but you will still need to separate the parallel and the for.
int nthreads;
#pragma omp parallel
{
    #pragma omp single nowait
    nthreads = omp_get_num_threads();

    #pragma omp for reduction(+:sum)
    for (i = 0; i < num_steps; i++) {
        double x = (i + 0.5) * step;
        sum = sum + 4.0 / (1.0 + x * x);
    }
}
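For reference, here is a minimal self-contained sketch of the whole routine with this simplification applied. The NUM_THREADS and num_steps values are placeholders, and the trailing main is only there to make it runnable:
#include <stdio.h>
#include <omp.h>

#define NUM_THREADS 8              /* requested; the runtime may grant fewer */
static const long num_steps = 100000000;

void pi_with_omp(void)
{
    double step = 1.0 / (double)num_steps;
    double sum = 0.0;
    int nthreads = 1;

    omp_set_num_threads(NUM_THREADS);
    double start_time = omp_get_wtime();

    #pragma omp parallel
    {
        #pragma omp single nowait
        nthreads = omp_get_num_threads();   /* threads actually received */

        #pragma omp for reduction(+:sum)
        for (long i = 0; i < num_steps; i++) {
            double x = (i + 0.5) * step;
            sum += 4.0 / (1.0 + x * x);
        }
    }

    double pi = step * sum;
    double run_time = omp_get_wtime() - start_time;
    printf("pi with %ld steps is %lf in %lf seconds using %d threads\n",
           num_steps, pi, run_time, nthreads);
}

int main(void) { pi_with_omp(); return 0; }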

How to distribute teams on GPU using OpenMP?

I'm trying to utilize my NVIDIA GeForce GT 740M for parallel programming using OpenMP and the clang-3.8 compiler.
When processed in parallel on the CPU, I manage to get the desired result. However, when processed on the GPU, my results are almost random numbers.
Therefore, I figured that I'm not correctly distributing my thread teams and that there might be some data races. I guess I have to structure my for loops differently, but I have no idea where the mistake could be.
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(int argc, char* argv[])
{
    const int n = 100; float a = 3.0f; float b = 2.0f;
    float *x = (float *) malloc(n * sizeof(float));
    float *y = (float *) malloc(n * sizeof(float));
    int i;
    int j;
    int k;
    double start;
    double end;

    start = omp_get_wtime();
    for (k = 0; k < n; k++) {
        x[k] = 2.0f;
        y[k] = 3.0f;
    }

    #pragma omp target data map(to:x[0:n]) map(tofrom:y[0:n]) map(to:i) map(to:j)
    {
        #pragma omp target teams
        #pragma omp distribute
        for (i = 0; i < n; i++) {
            #pragma omp parallel for
            for (j = 0; j < n; j++) {
                y[j] = a*x[j] + y[j];
            }
        }
    }
    end = omp_get_wtime();

    printf("Work took %f seconds.\n", end - start);
    free(x); free(y);
    return 0;
}
I guess it might have something to do with the architecture of my GPU, so I'm adding this:
I'm fairly new to the topic, so thanks for your help :)
Yes, there is a race here. Different teams are reading and writing to the same element of the array 'y'. Perhaps you want something like this?
for (i = 0; i < n; i++) {
    #pragma omp target teams distribute parallel for
    for (j = 0; j < n; j++) {
        y[j] = a*x[j] + y[j];
    }
}
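For reference, a self-contained sketch of the program with that change applied. The expected-value check at the end is illustrative, and keeping the outer i loop on the host, as in the snippet above, means one kernel launch per iteration:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const int n = 100;
    const float a = 3.0f;
    float *x = (float *) malloc(n * sizeof(float));
    float *y = (float *) malloc(n * sizeof(float));

    for (int k = 0; k < n; k++) { x[k] = 2.0f; y[k] = 3.0f; }

    double start = omp_get_wtime();
    #pragma omp target data map(to: x[0:n]) map(tofrom: y[0:n])
    {
        for (int i = 0; i < n; i++) {
            /* each lane updates its own y[j], so no two threads race on an element */
            #pragma omp target teams distribute parallel for
            for (int j = 0; j < n; j++)
                y[j] = a * x[j] + y[j];
        }
    }
    double end = omp_get_wtime();

    printf("y[0] = %f (expected %f)\n", y[0], 3.0f + n * a * 2.0f);
    printf("Work took %f seconds.\n", end - start);
    free(x); free(y);
    return 0;
}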

gcc openmp thread reuse

I am using GCC's implementation of OpenMP to try to parallelize a program. Basically the assignment is to add OpenMP pragmas to obtain speedup on a program that finds amicable numbers.
The original serial program was given (shown below, except for the 3 lines I added, marked with comments). We have to parallelize first just the outer loop, then just the inner loop. The outer loop was easy and I get close to ideal speedup for a given number of processors. For the inner loop, I get much worse performance than the original serial program. Basically what I am trying to do is a reduction on the sum variable.
Looking at the CPU usage, I am only using ~30% per core. What could be causing this? Is the program continually making new threads every time it hits the omp parallel for clause? Is there just that much more overhead in doing a barrier for the reduction? Or could it be a memory access issue (e.g. cache thrashing)? From what I have read, with most implementations of OpenMP threads get reused over time (e.g. pooled), so I am not so sure the first problem is what is wrong.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>

#define numThread 2

int main(int argc, char* argv[]) {
    int ser[29], end, i, j, a, limit, als;
    als = atoi(argv[1]);
    limit = atoi(argv[2]);
    for (i = 2; i < limit; i++) {
        ser[0] = i;
        for (a = 1; a <= als; a++) {
            ser[a] = 1;
            int prev = ser[a-1];
            if ((prev > i) || (a == 1)) {
                end = sqrt(prev);
                int sum = 0; // added this
                #pragma omp parallel for reduction(+:sum) num_threads(numThread) // added this
                for (j = 2; j <= end; j++) {
                    if (prev % j == 0) {
                        sum += j;
                        sum += prev / j;
                    }
                }
                ser[a] = sum + 1; // added this
            }
        }
        if (ser[als] == i) {
            printf("%d", i);
            for (j = 1; j < als; j++) {
                printf(", %d", ser[j]);
            }
            printf("\n");
        }
    }
}
OpenMP thread teams are instantiated on entering the parallel section. This means, indeed, that thread creation is repeated every time the inner loop starts.
To enable reuse of threads, use a larger parallel section (to control the lifetime of the team) and specifically control the parallelism for the outer/inner loops, as in the code below.
Execution time for test.exe 1 1000000 has gone down from 43 s to 22 s using this fix (and the number of threads reflects the numThread defined value + 1).
PS: Perhaps stating the obvious, it would not appear that parallelizing the inner loop is a sound performance measure. But that is likely the whole point of this exercise, and I won't critique the question for that.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <omp.h>

#define numThread 2

int main(int argc, char* argv[]) {
    int ser[29], end, i, j, a, limit, als;
    als = atoi(argv[1]);
    limit = atoi(argv[2]);
    #pragma omp parallel num_threads(numThread)
    {
        #pragma omp single
        for (i = 2; i < limit; i++) {
            ser[0] = i;
            for (a = 1; a <= als; a++) {
                ser[a] = 1;
                int prev = ser[a-1];
                if ((prev > i) || (a == 1)) {
                    end = sqrt(prev);
                    int sum = 0; // added this
                    #pragma omp parallel for reduction(+:sum) // added this
                    for (j = 2; j <= end; j++) {
                        if (prev % j == 0) {
                            sum += j;
                            sum += prev / j;
                        }
                    }
                    ser[a] = sum + 1; // added this
                }
            }
            if (ser[als] == i) {
                printf("%d", i);
                for (j = 1; j < als; j++) {
                    printf(", %d", ser[j]);
                }
                printf("\n");
            }
        }
    }
}
