I am testing FFTW in a Fortran program because I need to use it. Since I am working with huge matrices, my first approach is to use OpenMP. When my matrix has dimensions 500 x 500 x 500, the following error occurs:
Operating system error: Cannot allocate memory
Allocation would exceed memory limit

Program aborted. Backtrace:
I compiled the code using the following: gfortran -o test teste_fftw_openmp.f90 -I/usr/local/include -L/usr/lib/x86_64-linux-gnu -lfftw3_omp -lfftw3 -lm -fopenmp
PROGRAM test_fftw
  USE omp_lib
  USE, intrinsic :: iso_c_binding
  IMPLICIT NONE
  INCLUDE 'fftw3.f'

  INTEGER :: i, DD=500
  DOUBLE COMPLEX :: OUTPUT_FFTW(3,3,3)
  DOUBLE COMPLEX, ALLOCATABLE :: A3D(:,:,:), FINAL_OUTPUT(:,:,:)
  integer*8 :: plan
  integer :: iret, nthreads
  INTEGER :: indiceX, indiceY, indiceZ, window=2

  !! TESTING 3D FFTW with OPENMP
  ALLOCATE(A3D(DD,DD,DD))
  ALLOCATE(FINAL_OUTPUT(DD-2,DD-2,DD-2))

  write(*,*) '---------------'
  write(*,*) '------------TEST 3D FFTW WITH OPENMP----------'

  A3D = reshape((/(i, i=1,DD*DD*DD)/), shape(A3D))

  CALL dfftw_init_threads(iret)
  CALL dfftw_plan_with_nthreads(nthreads)
  CALL dfftw_plan_dft_3d(plan, 3,3,3, OUTPUT_FFTW, OUTPUT_FFTW, FFTW_FORWARD, FFTW_ESTIMATE)

  FINAL_OUTPUT = 0.

  !$OMP PARALLEL DO DEFAULT(SHARED) SHARED(A3D,plan,window) &
  !$OMP PRIVATE(indiceX, indiceY, indiceZ, OUTPUT_FFTW, FINAL_OUTPUT)
  DO indiceZ=1,10 !500-window
    write(*,*) 'INDICE Z=', indiceZ
    DO indiceY=1,10 !500-window
      DO indiceX=1,10 !500-window
        CALL dfftw_execute_dft(plan, A3D(indiceX:indiceX+window, indiceY:indiceY+window, indiceZ:indiceZ+window), OUTPUT_FFTW)
        FINAL_OUTPUT(indiceX,indiceY,indiceZ) = SUM(ABS(OUTPUT_FFTW))
      ENDDO
    ENDDO
  ENDDO
  !$OMP END PARALLEL DO

  CALL dfftw_destroy_plan(plan)
  CALL dfftw_cleanup_threads()
  DEALLOCATE(A3D, FINAL_OUTPUT)
END PROGRAM test_fftw
Notice that this error occurs even though I merely allocate the huge matrix (A3D) without running the loop over all of its values (to run over all values, the limits of the three nested loops would have to be 500-window).
I tried to solve this (tips here and here) by adding -mcmodel=medium to the compilation, without success.
I had success when I compiled with gfortran -o test teste_fftw_openmp.f90 -I/usr/local/include -L/usr/lib/x86_64-linux-gnu -lfftw3_omp -lfftw3 -lm -fopenmp -fmax-stack-var-size=65536
So, I don't understand:
1) Why is there a memory allocation problem if the huge matrix is a shared variable?
2) Will the solution I found still work if I have more huge matrix variables? For example, three more 500 x 500 x 500 matrices to store calculation results.
3) In the tips I found, people said that using allocatable arrays/matrices would solve the problem, but I was already using them without any difference. Is there anything else I need to do?
Two double complex arrays with 500 x 500 x 500 elements require 4 gigabytes of memory. It is likely that the amount of available memory in your computer is not sufficient.
If you only work with small windows, you might consider not keeping the whole array in memory at once, but only the parts of it you need. Or distribute the computation across multiple computers using MPI.
Or just use a computer with more RAM.
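As a quick sanity check of that figure (a sketch of mine, not part of the original answer): a DOUBLE COMPLEX element takes 16 bytes, so one 500 x 500 x 500 array already needs roughly 2 GB and the two arrays together about 4 GB, before counting any per-thread PRIVATE copies that OpenMP makes:

program memory_estimate
  implicit none
  integer, parameter :: dd = 500
  real(kind=8) :: bytes_per_array

  ! a DOUBLE COMPLEX element is 16 bytes (two 8-byte reals)
  bytes_per_array = real(dd, 8)**3 * 16.0d0

  write(*,*) 'one 500x500x500 double complex array:', bytes_per_array/1024.0d0**3, 'GiB'
  write(*,*) 'A3D plus FINAL_OUTPUT, roughly:      ', 2.0d0*bytes_per_array/1024.0d0**3, 'GiB'
end program memory_estimate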
I want to distribute subroutines to different tasks with OpenMP.
In my code I implemented this:
!$omp parallel
!$omp single
do thread = 1, omp_get_num_threads()
   !$omp task
   write(*,*) "Task,", thread, "is computing"
   call find_pairs(me, thread, points)
   call count_neighbors(me, thread, neighbors(:, thread))
   !$omp end task
end do
!$omp end single
!$omp end parallel
The subroutines find_pairs and count_neighbors do some calculations.
I set the number of threads in my program before with:
nr_threads = 4
call omp_set_num_threads(nr_threads)
Compiling this with GNU Fortran (Ubuntu 8.3.0-6ubuntu1) 8.3.0 and running it gives me only one thread, running at nearly 100% when monitoring with top. Nevertheless, it prints the expected output:
Task, 1 is computing
Task, 2 is computing
Task, 3 is computing
Task, 4 is computing
I compile it using:
gfortran -fopenmp main.f90 -o program
What I want is to distribute the different subroutine calls according to the number of OpenMP threads, so that they run in parallel.
From what I understand, a single thread is created, and that thread creates the different tasks.
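As a minimal check (a sketch, not the poster's code, with the real subroutine calls left out), printing which thread executes each task shows whether the tasks are actually spread over the team; note the explicit firstprivate on the loop index so every task captures its own value:

program task_check
  use omp_lib
  implicit none
  integer :: t

  call omp_set_num_threads(4)

  !$omp parallel
  !$omp single
  do t = 1, omp_get_num_threads()
     !$omp task firstprivate(t)
     ! report which member of the team picked this task up
     write(*,*) 'task', t, 'executed by thread', omp_get_thread_num()
     !$omp end task
  end do
  !$omp end single
  !$omp end parallel
end program task_check

With real work inside the tasks, this makes it easy to tell whether they are being spread over the team or all executed by the same thread.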
I have a simple n-body implementation and am trying to compile it to run on NVIDIA GPUs (Tesla K20m / GeForce GTX 650 Ti). I use the following compiler options:
-Minfo=all -acc -Minline -Mfpapprox -ta=tesla:cc35/nvidia
Everything works without -Mfpapprox, but when I use it, the compilation fails with the following output:
346, Accelerator restriction: unsupported operation: RSQRTSS
Line 346 reads:
float rdistance=1.0f/sqrtf(drSquared);
where
float drSquared=dx*dx+dy*dy+dz*dz+softening;
and dx, dy, dz are float values. This line is inside a for() loop annotated with #pragma acc parallel loop independent.
What is the problem with -Mfpapprox?
-Mfpapprox tells the compiler to use very low-precision CPU instructions to approximate DIV or SQRT. These instructions are not supported on the GPU. The GPU SQRT is both fast and precise, so there is no need for a low-precision version.
Actually, even on the CPU I'd recommend that you not use -Mfpapprox unless you really understand the mathematics of your code and it can handle a high degree of imprecision (as much as 5-6 bits, or ~20 ULPs, off). We added this flag about 10 years ago because at the time the CPU's divide operation was very expensive. However, CPU divide performance has greatly improved since then (as has sqrt), so you're generally better off not sacrificing precision for the little bit of speed-up you might get from this flag.
I'll put in an issue report requesting that the compiler ignore -Mfpapprox for GPU code so you won't see this error.
In trying to optimise some code I find that using OpenMP linearly increases the time it takes to run. The representative section of code that I am trying to speed up is as follows:
CALL system_clock(count_rate=cr)
CALL system_clock(count_max=cm)
rate = REAL(cr)
CALL SYSTEM_CLOCK(c1)

DO k = 1, ntotal
   CALL OMP_INIT_LOCK(locks(k))
END DO

!$OMP PARALLEL DO DEFAULT(SHARED) PRIVATE(i,j,k)
DO k = 1, niac
   i = pair_i(k)
   j = pair_j(k)
   dvx(:,k) = vx(:,i) - vx(:,j)

   CALL omp_set_lock(locks(i))
   CALL DGER(dim,dim,-1.d0, (disp_nmh(:,j)-disp_nmh(:,i)),1, &
        (dwdx_nor(dim+1:2*dim,k)*V_0(j)),1, particle_data(i)%def_grad,dim)
   CALL DGER(dim,dim,-1.d0, (-dvx(:,k)),1, &
        (dwdx_nor(dim+1:2*dim,k)*V_0(j)) ,1, particle_data(i)%vel_grad(1:dim,1:dim),dim)
   CALL omp_unset_lock(locks(i))

   CALL omp_set_lock(locks(j))
   CALL DGER(dim,dim,-1.d0, (dvx(:,k)),1, &
        (dwdx_nor(3*dim+1:4*dim,k)*V_0(i)) ,1, particle_data(j)%vel_grad(1:dim,1:dim),dim)
   CALL DGER(dim,dim,-1.d0, (disp_nmh(:,i)-disp_nmh(:,j)),1, &
        (dwdx_nor(3*dim+1:4*dim,k)*V_0(i)),1, particle_data(j)%def_grad,dim)
   CALL omp_unset_lock(locks(j))
END DO
!$OMP END PARALLEL DO

CALL SYSTEM_CLOCK(c2)
t_el = t_el + (c2-c1)/rate
WRITE(*,*) "Wall time elapsed: ", t_el
Note that for the simulation I am testing, the k loop runs 14000 times, which I thought was a reasonable candidate for running in parallel. As far as I know, I have to use the locks to ensure that threads which are given the same value of "i" (but a different value of "j") cannot write to the same index of the arrays at the same time. I cannot figure out whether the version of BLAS I use (installed via sudo apt-get install libblas-dev liblapack-dev) is thread safe. I ran a simulation with 8 cores and got the same result as without OpenMP, so I am guessing it might be. BLAS is used, in this case, to calculate and sum the outer products of many 3x3 matrices.
Is the implementation of OpenMP above the best way to speed up this code? I know very little about OpenMP but my guesses are that:
the memory access being all over the place ("i" is sequential but "j" is not)
the overhead in starting up and shutting down all the threads
the constant locking and unlocking
and maybe the small loop size (although I thought 14000 would be sufficient)
are significantly outweighing the performance benefits. Is this correct? Or can the code above be modified to get some performance gain?
EDIT
I should probably add that the code above is part of a time integration loop. Hopefully this explains why the elapsed time is summed.
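One way to test the "constant locking and unlocking" guess in isolation is to time a loop that does nothing but take and release the locks. The sketch below is mine, not the real code; ntotal and the mod-based index pattern are placeholders standing in for pair_i/pair_j:

program lock_overhead
  use omp_lib
  implicit none
  integer, parameter :: niac = 14000, ntotal = 2000
  integer(kind=omp_lock_kind) :: locks(ntotal)
  integer :: k
  double precision :: t0, t1

  do k = 1, ntotal
     call omp_init_lock(locks(k))
  end do

  t0 = omp_get_wtime()
  !$omp parallel do default(shared) private(k)
  do k = 1, niac
     ! same lock/unlock traffic as the real loop, but with no work inside
     call omp_set_lock(locks(mod(k, ntotal) + 1))
     call omp_unset_lock(locks(mod(k, ntotal) + 1))
  end do
  !$omp end parallel do
  t1 = omp_get_wtime()

  write(*,*) 'lock-only wall time:', t1 - t0

  do k = 1, ntotal
     call omp_destroy_lock(locks(k))
  end do
end program lock_overhead

If the lock-only time is already a sizeable fraction of the full loop's wall time, the locking guess is the right one; if it is negligible, the slowdown is coming from somewhere else.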
I've implemented the code below, which generates vectors of random numbers using the MKL VSL library:
! ifort -mkl test1.f90 -cpp -openmp
include "mkl_vsl.f90"
#define ITERATION 1000000
#define LENGH 10000
program test
use mkl_vsl_type
use mkl_vsl
use mkl_service
use omp_lib
implicit none
integer i,brng, method, seed, dm,n,errcode
real(kind=8) r(LENGH) , s
real(kind=8) a, b, start,endd
TYPE (VSL_STREAM_STATE) :: stream
integer(4) :: nt
! *****
brng = VSL_BRNG_SOBOL
method = VSL_RNG_METHOD_UNIFORM_STD
seed = 777
a = 0.0
b = 1.0
s = 0.0
!call omp_set_num_threads(4)
call omp_set_dynamic(0)
nt = omp_get_max_threads()
! *****
print *,'max OMP threads number',nt
if (1 == omp_get_dynamic()) then
   print '(" Intel OMP may use less than "I0" threads for a large problem")', nt
else
   print '(" Intel OMP should use "I0" threads for a large problem")', nt
end if
if (1 == omp_get_max_threads()) print *, "Intel MKL does not employ threading"

!call mkl_set_num_threads(4)
call mkl_set_dynamic(0)
nt = mkl_get_max_threads()
print *,'max MKL threads number',nt
if (1 == mkl_get_dynamic()) then
   print '(" Intel MKL may use less than "I0" threads for a large problem")', nt
else
   print '(" Intel MKL should use "I0" threads for a large problem")', nt
end if
if (1 == mkl_get_max_threads()) print *, "Intel MKL does not employ threading"
! ***** Initialize *****
errcode=vslnewstream( stream, brng, seed )
! ***** Call RNG *****
start=omp_get_wtime()
do i=1,ITERATION
   errcode=vdrnguniform( method, stream, LENGH, r, a, b )
   s = s + sum(r)/LENGH
end do
endd=omp_get_wtime()
! ***** DEleting the stream *****
errcode=vsldeletestream(stream)
! *****
print *, s/ITERATION, endd-start
end program test
I don't see any speedup when using, for instance, 4 or 32 threads.
I use the Intel compiler version 13.1.3 and compile doing
ifort -mkl test1.f90 -cpp -openmp
It looks as though the random numbers are not being generated in parallel.
Any hints here?
Thank you,
Éric.
Your code doesn't contain any OpenMP directives to actually parallelise the work, so when it executes it runs only one thread. It is not sufficient to use omp_lib and scatter a few calls to functions such as omp_get_wtime around; you actually have to insert some worksharing directives.
If I run your code, as is, my performance monitor shows that only one thread is active, and your code reports
max OMP threads number 16
Intel OMP should use 16 threads for a large problem
max MKL threads number 16
Intel MKL should use 16 threads for a large problem
0.499972674509302 11.2807227574035
If I simply wrap the loop in an OpenMP worksharing directive, like this
!$omp parallel do
do i=1,ITERATION
   errcode=vdrnguniform( method, stream, LENGH, r, a, b )
   s = s + sum(r)/LENGH
end do
!$omp end parallel do
then the performance monitor on my dual-quad-core-with-hyperthreading-PC shows that 16 threads are active and your program reports
max OMP threads number 16
Intel OMP should use 16 threads for a large problem
max MKL threads number 16
Intel MKL should use 16 threads for a large problem
0.380979220384302 7.17352125150956
I guess the hint I would offer is: study your favourite OpenMP tutorial, in particular the sections covering the parallel and do directives. I offer no warranty that the simple modification I have made does not break your program; in particular I don't guarantee that I haven't introduced a race condition.
I leave you the exercise of determining whether the speed-up on going from 1 to 16 (hyper-)threads is acceptable and any analysis of why it appears to be so modest.
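For what it is worth, one way to remove both of the races the previous answer warns about (the shared stream and the shared accumulator s) is to give every thread its own stream and let OpenMP reduce s. The sketch below is mine, not part of the answer above; it swaps the Sobol quasi-random generator for the MT2203 family, which MKL provides precisely as a set of independent pseudo-random streams, so the numerical result will differ from the Sobol version:

! compile e.g. with: ifort -mkl sketch.f90 -openmp
include "mkl_vsl.f90"
program parallel_rng_sketch
  use mkl_vsl_type
  use mkl_vsl
  use omp_lib
  implicit none
  integer, parameter :: iterations = 1000000, lengh = 10000
  integer :: i, method, seed, errcode
  real(kind=8) :: r(lengh), s, a, b
  type (vsl_stream_state) :: stream

  method = VSL_RNG_METHOD_UNIFORM_STD
  seed   = 777
  a = 0.0d0
  b = 1.0d0
  s = 0.0d0

  !$omp parallel default(shared) private(stream, r, errcode) reduction(+:s)
  ! one independent member of the MT2203 family per thread
  errcode = vslnewstream( stream, VSL_BRNG_MT2203 + omp_get_thread_num(), seed )
  !$omp do
  do i = 1, iterations
     errcode = vdrnguniform( method, stream, lengh, r, a, b )
     s = s + sum(r)/lengh
  end do
  !$omp end do
  errcode = vsldeletestream( stream )
  !$omp end parallel

  print *, s/iterations
end program parallel_rng_sketch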
I have a very simple example of a strange segfault I am getting; the code is as follows:
program big_array_segfault
  integer :: nX = 13000
  integer :: nY = 100000
  real(kind = 8), allocatable :: bigarr(:,:)

  allocate(bigarr(nX, nY))
end program big_array_segfault
Note that I have 20 GB of RAM to work with, and this does not even begin to approach that. Everything I have seen online suggests that this may be a problem with stack space vs. heap space, but I don't know how to control the memory in that way in Fortran.
For what it is worth, I am compiling with gfortran -o big_arr.exe test.f90, so there is nothing special going on in the compilation.
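One small diagnostic step (a sketch of mine, not a fix) is to give the ALLOCATE a stat= and errmsg= argument. An allocatable array like this lives on the heap, so if the heap allocation itself is what fails you get a readable message instead of a crash, which helps separate a genuine out-of-memory condition from a stack problem:

program big_array_check
  implicit none
  integer :: nX = 13000
  integer :: nY = 100000
  integer :: ierr
  character(len=256) :: msg
  real(kind=8), allocatable :: bigarr(:,:)

  ! stat=/errmsg= make a failed ALLOCATE report an error instead of aborting
  allocate(bigarr(nX, nY), stat=ierr, errmsg=msg)
  if (ierr /= 0) then
     write(*,*) 'allocation failed: ', trim(msg)
  else
     write(*,*) 'allocated', real(nX, 8)*real(nY, 8)*8.0d0/1024.0d0**3, 'GiB'
  end if
end program big_array_check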