I have the following 4x4 matrix-vector multiply code:
double const __restrict__ a[16];
double const __restrict__ x[4];
double __restrict__ y[4];
//#pragma GCC unroll 1 - does not work either
#pragma GCC nounroll
for ( int j = 0; j < 4; ++j )
{
    double const* __restrict__ aj = a + j * 4;
    double const xj = x[j];
    #pragma GCC ivdep
    for ( int i = 0; i < 4; ++i )
    {
        y[i] += aj[i] * xj;
    }
}
I compile with the -O3 -mavx flags. The inner loop is vectorized (a single vectorized multiply-add). However, gcc (7.2) keeps unrolling the outer loop 4 times unless I use -O2 or a lower optimization level.
Is there a way to override -O3 unrolling of a particular loop?
NB: a similar #pragma nounroll works if I use Intel icc.
According to the documentation, #pragma GCC unroll 1 is supposed to work if you place it immediately before the loop it should apply to. If it doesn't, then you should submit a bug report.
Alternatively, you can use a function attribute to set optimizations, I think:
void myfn () __attribute__((optimize("no-unroll-loops")));
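For instance, applied to a wrapper around the loop nest from the question, it could look like the sketch below (the function name mvmul4 is made up for illustration):

/* Sketch: suppress unrolling for just this function via the optimize attribute. */
__attribute__((optimize("no-unroll-loops")))
void mvmul4(double const* __restrict__ a,
            double const* __restrict__ x,
            double* __restrict__ y)
{
    for (int j = 0; j < 4; ++j)
    {
        double const* __restrict__ aj = a + j * 4;
        double const xj = x[j];
        for (int i = 0; i < 4; ++i)
            y[i] += aj[i] * xj;
    }
}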
For small functions, when full and partial loop unrolling needs to be suppressed, try the following function attribute:
__attribute__((optimize("Os")))
I'm benchmarking different matrix multiply forms at different optimization levels (for teaching purposes) and I detected strange behavior in gcc's autovectorization: it fails to vectorize when the arrays are parameters (see mxmp) but is able to vectorize when the arrays are global variables (see mxmg).
gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
but behaviour was the same with older gcc versions
Compiling options:
gcc -O3 -mavx2 -mfma
#define N 1024
float A[N][N], B[N][N], C[N][N];
void mxmp(float A[N][N], float B[N][N], float C[N][N]) {
    int i,j,k;
    for (i=0; i<N; i++)
        for (j=0; j<N; j++)
            for (k=0; k<N; k++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
}

void mxmg() {
    int i,j,k;
    for (i=0; i<N; i++)
        for (j=0; j<N; j++)
            for (k=0; k<N; k++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
}
int main() {
    mxmg();
    mxmp(A, B, C);
}
I expected the compiler to do the same in both functions; however, mxmp requires about 10 times the execution time of mxmg. Exploring the assembly code shows that gcc is able to autovectorize mxmg (where the arrays are global variables) but fails to vectorize mxmp (where the arrays are parameters).
I tried the same with the kij form and gcc is able to vectorize both functions.
I need help to discover why gcc behaves this way, and how to help gcc (pragmas, compile options, attributes, ...) properly vectorize the mxmp function.
Thanks
When the arrays are global, the compiler can easily see that they are disjoint memory regions. When they are function parameters, you could call mxmp(A,A,A), so it has to assume that writing to C may modify A or B, which could affect later iterations and complicates vectorization. Of course the compiler could inline or do other things to know it in your particular case...
You can explicitly specify the lack of aliasing with restrict:
void mxmp(float A[restrict N][N], float B[restrict N][N], float C[restrict N][N]) {
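A complete sketch of that version (assuming C99; the array-parameter restrict syntax is not valid C++):

/* restrict tells the compiler that A, B and C do not alias, which removes
   the assumed dependence that blocked vectorization of the parameter version. */
void mxmp(float A[restrict N][N], float B[restrict N][N], float C[restrict N][N]) {
    int i, j, k;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
}

Compiling with -fopt-info-vec (or -fopt-info-vec-missed) makes gcc report which loops it did, or did not, vectorize.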
I would like to understand why GCC does not autovectorize the following loop unless I pass -ffinite-math-only. To my understanding, and according to the GCC manual, the optimization should only require -funsafe-math-optimizations:
If the selected floating-point hardware includes the NEON extension (e.g. -mfpu=neon), note that floating-point operations are not generated by GCC's auto-vectorization pass unless -funsafe-math-optimizations is also specified. This is because NEON
hardware does not fully implement the IEEE 754 standard for floating-point arithmetic (in particular denormal values are treated as zero), so the use of NEON instructions may lead to a loss of precision.
In particular, the flag enables the compiler to assume associative math, so that it can accumulate with 4 partial sums first. The code seems pretty straightforward:
template<typename SumType = double>
class UipLineResult {
public:
    SumType sqsum;
    SumType dcsum;
    float pkp;
    float pkn;
public:
    UipLineResult() {
        clear();
    }
    void clear() {
        sqsum = 0;
        dcsum = 0;
        pkp = -std::numeric_limits<float>::max();
        pkn = +std::numeric_limits<float>::max();
    }
};
Loop that is not vectorized
static void addSamplesLine(const float* ss, UipLineResult<>* line) {
    UipLineResult<float> intermediate;
    for(int idx = 0; idx < 120; idx++) {
        float s = ss[idx];
        intermediate.sqsum += s * s;
        intermediate.dcsum += s;
        intermediate.pkp = intermediate.pkp < s ? s : intermediate.pkp;
        intermediate.pkn = intermediate.pkn > s ? s : intermediate.pkn;
    }
    line->addIntermediate(&intermediate);
}
For example, the squared-sum addition looks like
intermediate.sqsum += s * s;
107da: ee47 6aa7 vmla.f32 s13, s15, s15
With -ffinite-math-only this becomes
intermediate.sqsum += s * s;
1054c: ef40 6df0 vmla.f32 q11, q8, q8
Compiler flags
-funsafe-math-optimizations -ffinite-math-only -mcpu=cortex-a9 -mfpu=neon
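For reference, the reassociation that -funsafe-math-optimizations permits would look roughly like the hand-written sketch below for the two sum members (illustrative only, names made up; the min/max members are left out):

/* Four partial sums, written out by hand: this is the reordering the
   vectorizer needs permission for, since FP addition is not associative. */
static void addSamplesSums(const float* ss, float* sqsum, float* dcsum) {
    float sq0 = 0.f, sq1 = 0.f, sq2 = 0.f, sq3 = 0.f;
    float dc0 = 0.f, dc1 = 0.f, dc2 = 0.f, dc3 = 0.f;
    for (int idx = 0; idx < 120; idx += 4) {
        sq0 += ss[idx + 0] * ss[idx + 0];  dc0 += ss[idx + 0];
        sq1 += ss[idx + 1] * ss[idx + 1];  dc1 += ss[idx + 1];
        sq2 += ss[idx + 2] * ss[idx + 2];  dc2 += ss[idx + 2];
        sq3 += ss[idx + 3] * ss[idx + 3];  dc3 += ss[idx + 3];
    }
    *sqsum = (sq0 + sq1) + (sq2 + sq3);
    *dcsum = (dc0 + dc1) + (dc2 + dc3);
}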
I am trying to sum all the elements of an array which is initialized in the same code. As every element is independent of the others, I tried to perform the sum in parallel. My code is shown below:
#include <cmath>     // exp, M_PI
#include <iostream>
using namespace std;

int main(int argc, char** argv)
{
    cout.precision(20);
    double sumre=0., Mre[11];
    int n=11;
    for(int i=0; i<n; i++)
        Mre[i]=2.*exp(-10*M_PI*i/(1.*n));
    #pragma omp parallel for reduction(+:sumre)
    for(int i=0; i<n; i++)
    {
        sumre+=Mre[i];
    }
    cout<<sumre<<"\n";
}
which I compile and run with:
g++ -O3 -o sum sumparallel.cpp -fopenmp
./sum
respectively. My problem is that the output differs every time I run it. Sometimes it gives
2.1220129388411006488
or
2.1220129388411002047
Does anyone have an idea what is happening here?
Some of these comments hint at the problem here, but there could be two distinct issues:
Double precision numbers do not have 20 decimal digits of precision
If you want to print the maximum useful precision of sumre (DBL_DECIMAL_DIG, which is 17 for IEEE 754 double), use something like this:
#include <float.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    ...
    printf("%.*g", DBL_DECIMAL_DIG, number);
    return 0;
}
Floating point arithmetic is not associative
The effect of this property is roundoff error that depends on the order in which the values are added. In fact, the function that you have defined, a Gaussian, is especially prone to roundoff in summation. Since the distribution of iterations among OpenMP threads (and thus the order in which partial sums are combined) is not fixed, you may get slightly different answers each time you run it. To get around this you may use the Kahan summation algorithm. An implementation with OpenMP would look something like this:
...
double sum = 0.0, c = 0.0;
#pragma omp parallel for reduction(+:sum, c)
for(int i = 0; i < n; i++)
{
    double y = Mre[i] - c;
    double t = sum + y;
    c = (t - sum) - y;
    sum = t;
}
sum = sum - c;
...
I'm trying to learn how to exploit vectorization with gcc. I followed this tutorial by Erik Holk (with source code here).
I just modified it to use doubles. I used this dot product to multiply randomly generated 1200x1200 square matrices of doubles (300x300 double4). I checked that the results are the same. But what really surprised me is that the simple dot product was actually 10x faster than my manually vectorized one.
Maybe double4 is too big for SSE (it would need AVX2?). But I would expect that even when gcc cannot find a suitable instruction for dealing with a double4 at once, it would still be able to exploit the explicit information that the data come in big chunks for auto-vectorization.
Details:
The results were:
dot_simple:
time elapsed 1.90000 [s] for 1.728000e+09 evaluations => 9.094737e+08 [ops/s]
dot_SSE:
time elapsed 15.78000 [s] for 1.728000e+09 evaluations => 1.095057e+08 [ops/s]
I used gcc 4.6.3 on an Intel® Core™ i5 CPU 750 @ 2.67GHz × 4 with the options -std=c99 -O3 -ftree-vectorize -unroll-loops --param max-unroll-times=4 -ffast-math, or with just -O2 (the result was the same).
I did it using python/scipy.weave() for convenience, but I hope it doesn't change anything
The code:
double dot_simple( int n, double *a, double *b ){
    double dot = 0;
    for (int i=0; i<n; i++){
        dot += a[i]*b[i];
    }
    return dot;
}
and this one uses GCC's vector extensions explicitly:
double dot_SSE( int n, double *a, double *b ){
    const int VECTOR_SIZE = 4;
    typedef double double4 __attribute__ ((vector_size (sizeof(double) * VECTOR_SIZE)));
    double4 sum4 = {0};
    double4* a4 = (double4 *)a;
    double4* b4 = (double4 *)b;
    for (int i=0; i<n; i++){
        sum4 += *a4 * *b4;
        a4++; b4++;
        //sum4 += a4[i] * b4[i];
    }
    union { double4 sum4_; double sum[VECTOR_SIZE]; };
    sum4_ = sum4;
    return sum[0]+sum[1]+sum[2]+sum[3];
}
Then I used it for the multiplication of 300x300 random matrices to measure performance:
void mmul( int n, double* A, double* B, double* C ){
    int n4 = n*4;
    for (int i=0; i<n4; i++){
        for (int j=0; j<n4; j++){
            double* Ai = A + n4*i;
            double* Bj = B + n4*j;
            C[ i*n4 + j ] = dot_SSE( n, Ai, Bj );
            //C[ i*n4 + j ] = dot_simple( n4, Ai, Bj );
            ijsum++;
        }
    }
}
scipy weave code:
def mmul_2(A, B, C, __force__=0):
    code = r''' mmul( NA[0]/4, A, B, C ); '''
    weave_options = {
        'extra_compile_args': ['-std=c99 -O3 -ftree-vectorize -unroll-loops --param max-unroll-times=4 -ffast-math'],
        'compiler' : 'gcc', 'force' : __force__ }
    return weave.inline(code, ['A','B','C'], verbose=3, headers=['"vectortest.h"'], include_dirs=['.'], **weave_options)
One of the main problems is that in your function dot_SSE you loop over n items when you should only loop over n/2 items (or n/4 with AVX).
To fix this with GCC's vector extensions you can do this:
double dot_double2(int n, double *a, double *b ) {
    typedef double double2 __attribute__ ((vector_size (16)));
    double2 sum2 = {};
    int i;
    double2* a2 = (double2*)a;
    double2* b2 = (double2*)b;
    for(i=0; i<n/2; i++) {
        sum2 += a2[i]*b2[i];
    }
    double dot = sum2[0] + sum2[1];
    for(i*=2; i<n; i++) dot += a[i]*b[i];
    return dot;
}
The other problem with your code is that it has a dependency chain. Your CPU can do a simultaneous SSE addition and multiplication but only for independent data paths. To fix this you need to unroll the loop. The following code unrolls the loop by 2 (but you probably need to unroll by three for the best results).
double dot_double2_unroll2(int n, double *a, double *b ) {
    typedef double double2 __attribute__ ((vector_size (16)));
    double2 sum2_v1 = {};
    double2 sum2_v2 = {};
    int i;
    double2* a2 = (double2*)a;
    double2* b2 = (double2*)b;
    for(i=0; i<n/4; i++) {
        sum2_v1 += a2[2*i+0]*b2[2*i+0];
        sum2_v2 += a2[2*i+1]*b2[2*i+1];
    }
    double dot = sum2_v1[0] + sum2_v1[1] + sum2_v2[0] + sum2_v2[1];
    for(i*=4; i<n; i++) dot += a[i]*b[i];
    return dot;
}
Here is a version using double4 which I think is really what you wanted with your original dot_SSE function. It's ideal for AVX (though it still needs to be unrolled) but it will still work with SSE2 as well. In fact with SSE it seems GCC breaks it into two chains which effectively unrolls the loop by 2.
double dot_double4(int n, double *a, double *b ) {
    typedef double double4 __attribute__ ((vector_size (32)));
    double4 sum4 = {};
    int i;
    double4* a4 = (double4*)a;
    double4* b4 = (double4*)b;
    for(i=0; i<n/4; i++) {
        sum4 += a4[i]*b4[i];
    }
    double dot = sum4[0] + sum4[1] + sum4[2] + sum4[3];
    for(i*=4; i<n; i++) dot += a[i]*b[i];
    return dot;
}
If you compile this with FMA enabled it will generate FMA3 instructions. I tested all these functions here (you can edit and compile the code yourself as well): http://coliru.stacked-crooked.com/a/273268902c76b116
Note that using SSE/AVX for a single dot product in matrix multiplication is not the optimal use of SIMD. You should do two (four) dot products at once with SSE (AVX) for double-precision floating point.
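As a rough sketch of that idea with GCC vector extensions (illustrative only; the function name is made up): each SIMD lane accumulates a different dot product, so no horizontal reduction is needed inside the loop.

/* Sketch (SSE2 / double2): compute the dot products of a with two vectors
   b0 and b1 at once, so the two SIMD lanes hold independent results. */
void dot_pair(int n, const double *a, const double *b0, const double *b1,
              double *out0, double *out1) {
    typedef double double2 __attribute__ ((vector_size (16)));
    double2 acc = {};
    for (int k = 0; k < n; k++) {
        double2 bk = { b0[k], b1[k] };   /* lane 0 <- b0[k], lane 1 <- b1[k] */
        double2 ak = { a[k], a[k] };     /* broadcast a[k] to both lanes */
        acc += ak * bk;
    }
    *out0 = acc[0];                      /* dot(a, b0) */
    *out1 = acc[1];                      /* dot(a, b1) */
}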
I am not the best when it comes to compiling/writing makefiles.
I am trying to write a program that uses both GSL and OpenMP.
I have no problem using GSL and OpenMP separately, but I'm having issues using both. For instance, I can compile the GSL program
http://www.gnu.org/software/gsl/manual/html_node/An-Example-Program.html
By typing
$gcc -c Bessel.c
$gcc Bessel.o -lgsl -lgslcblas -lm
$./a.out
and it works.
I was also able to compile the program that uses OpenMP that I found here:
Starting a thread for each inner loop in OpenMP
In this case I typed
$gcc -fopenmp test_omp.c
$./a.out
And I got what I wanted (all 4 threads I have were used).
However, when I simply write a program that combines the two codes
#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>
#include <omp.h>
int main (void)
{
    double x = 5.0;
    double y = gsl_sf_bessel_J0 (x);
    printf ("J0(%g) = %.18e\n", x, y);
    int dimension = 4;
    int i = 0;
    int j = 0;
    #pragma omp parallel private(i, j)
    for (i = 0; i < dimension; i++)
        for (j = 0; j < dimension; j++)
            printf("i=%d, jjj=%d, thread = %d\n", i, j, omp_get_thread_num());
    return 0;
}
Then I try to compile by typing
$gcc -c Bessel_omp_test.c
$gcc Bessel_omp_test.o -fopenmp -lgsl -lgslcblas -lm
$./a.out
The GSL part works (The Bessel function is computed), but only one thread is used for the OpenMP part. I'm not sure what's wrong here...
You missed the for worksharing directive in your OpenMP part. It should be:
// Just in case GSL modifies the number of threads
omp_set_num_threads(omp_get_max_threads());
omp_set_dynamic(0);

#pragma omp parallel for private(i, j)
for (i = 0; i < dimension; i++)
    for (j = 0; j < dimension; j++)
        printf("i=%d, jjj=%d, thread = %d\n", i, j, omp_get_thread_num());
Edit: To summarise the discussion in the comments below, the OP failed to supply -fopenmp during the compilation phase. That prevented GCC from recognising the OpenMP directives, and thus no parallel code was generated.
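With -fopenmp supplied to both steps, the original commands become:
$gcc -fopenmp -c Bessel_omp_test.c
$gcc -fopenmp Bessel_omp_test.o -lgsl -lgslcblas -lm
$./a.out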
IMHO, it's incorrect to declare the variables i and j as shared. Try declaring them private. Otherwise, each thread would get the same j and j++ would generate a race condition among threads.
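A variant that sidesteps the sharing question entirely is to declare the loop variables inside the for statements, which makes them private automatically (a sketch of the same loop):

#pragma omp parallel for
for (int i = 0; i < dimension; i++)
    for (int j = 0; j < dimension; j++)
        printf("i=%d, j=%d, thread = %d\n", i, j, omp_get_thread_num());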