OpenMP FFTW with Fortran not thread safe

I am trying to use FFTW with OpenMP and Fortran, but I get wrong results when executing in parallel; the results also change from one run to the next, which is typical behaviour when parallelisation goes wrong.
I am aiming for a simple 3D real-to-complex transform. Following the FFTW tutorial, I took everything except the call to dfftw_execute_dft_r2c() out of the parallel region, but it doesn't seem to work.
I use FFTW 3.3.8, configured with ./configure --enable-threads --enable-openmp --enable-mpi and compile my code with gfortran program.f03 -o program.o -I/usr/include -fopenmp -lfftw3_omp -lfftw3 -g -Wall.
This is what my program looks like:
program use_fftw
use,intrinsic :: iso_c_binding
use omp_lib
implicit none
include 'fftw3.f03'
integer, parameter :: dp=kind(1.0d0)
integer, parameter :: Nx = 10
integer, parameter :: Ny = 5
integer, parameter :: Nz = 5
real(dp), parameter :: pi = 3.1415926d0
real(dp), parameter :: physical_length_x = 20.d0
real(dp), parameter :: physical_length_y = 10.d0
real(dp), parameter :: physical_length_z = 10.d0
real(dp), parameter :: lambda1 = 0.5d0
real(dp), parameter :: lambda2 = 0.7d0
real(dp), parameter :: lambda3 = 0.9d0
real(dp), parameter :: dx = physical_length_x/real(Nx,dp)
real(dp), parameter :: dy = physical_length_y/real(Ny,dp)
real(dp), parameter :: dz = physical_length_z/real(Nz,dp)
integer :: void, nthreads
integer :: i, j, k
real(dp):: d
complex(dp), allocatable, dimension(:,:,:) :: arr_out
real(dp), allocatable, dimension(:,:,:) :: arr_in
integer*8 :: plan_forward
allocate(arr_in( 1:Nx, 1:Ny, 1:Nz)); arr_in = 0
allocate(arr_out(1:Nx/2+1, 1:Ny, 1:Nz)); arr_out = 0
!------------------------------
! Initialize fftw stuff
!------------------------------
! Call before any FFTW routine is called outside of parallel region
void = fftw_init_threads()
if (void==0) then
write(*,*) "Error in fftw_init_threads, quitting"
stop
endif
nthreads = omp_get_num_threads()
call fftw_plan_with_nthreads(nthreads)
! plan execution is thread-safe, but plan creation and destruction are not:
! you should create/destroy plans only from a single thread
call dfftw_plan_dft_r2c_3d(plan_forward, Nx, Ny, Nz, arr_in, arr_out, FFTW_ESTIMATE)
!--------------------------------
! Start parallel region
!--------------------------------
!$OMP PARALLEL PRIVATE( j, k, d)
! Fill array with wave
! NOTE: wave only depends on x so you can plot it later.
!$OMP DO
do i = 1, Nx
d = 2.0*pi*i*dx
do j = 1, Ny
do k = 1, Nz
arr_in(i,j,k) = cos(d/lambda1)+sin(d/lambda2)+cos(d/lambda3)
enddo
enddo
enddo
!$OMP END DO
call dfftw_execute_dft_r2c(plan_forward, arr_in, arr_out)
!$OMP END PARALLEL
!-----------------
! print results
!-----------------
do i=1, Nx/2+1
do j=1, Ny
do k=1, Nz
write(*,'(F12.6,A3,F12.6,A3)',advance='no') real(arr_out(i,j,k)), " , ", aimag(arr_out(i,j,k)), " ||"
enddo
write(*,*)
enddo
write(*,*)
enddo
deallocate(arr_in, arr_out)
! destroy plans is not thread-safe; do only with single
call dfftw_destroy_plan(plan_forward)
end program use_fftw
I also tried moving the initialisation part of FFTW (void = fftw_init_threads(); call fftw_plan_with_nthreads(nthreads); call dfftw_plan_dft_r2c_3d(...)) into the parallel region, using a !$OMP SINGLE block and synchronising with a barrier afterwards, but the situation didn't improve.
Can anyone help me?
EDIT: I was able to test my program on another system and the problem remains. So the issue apparently isn't in my installation of OpenMP or FFTW, but somewhere in the program itself.

You should normally call the FFTW execute routines outside of the parallel region. They have their own parallel regions inside them and will take care of running the transform in parallel with as many threads as you requested during planning. They will re-use your existing OpenMP threads.
You can also call them inside a parallel region, but then on different arrays, not on the same array! In that case the plan should be created for a single thread; each thread would then perform, for example, a 2D transform of one slice of the array.
Thread safety here means you can call the routines concurrently, but each call must work on different data.
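In terms of the program in the question, a minimal sketch of that structure could look like the following (it reuses the question's declarations; one extra change is that omp_get_max_threads() replaces omp_get_num_threads(), since the latter returns 1 when called outside a parallel region):
! Initialise FFTW's threading once, from serial code.
void = fftw_init_threads()
if (void == 0) stop "Error in fftw_init_threads"
! omp_get_num_threads() returns 1 outside a parallel region, so ask for
! the maximum number of threads instead.
nthreads = omp_get_max_threads()
call fftw_plan_with_nthreads(nthreads)
! Plan creation is not thread-safe: create the plan from a single (serial) thread.
call dfftw_plan_dft_r2c_3d(plan_forward, Nx, Ny, Nz, arr_in, arr_out, FFTW_ESTIMATE)
! Fill the input array in parallel, as before.
!$OMP PARALLEL DO PRIVATE(i, j, k, d)
do i = 1, Nx
  d = 2.0d0*pi*i*dx
  do j = 1, Ny
    do k = 1, Nz
      arr_in(i,j,k) = cos(d/lambda1) + sin(d/lambda2) + cos(d/lambda3)
    enddo
  enddo
enddo
!$OMP END PARALLEL DO
! Execute from serial code: FFTW runs the transform itself with the number of
! threads requested at planning time, re-using the existing OpenMP threads.
call dfftw_execute_dft_r2c(plan_forward, arr_in, arr_out)
call dfftw_destroy_plan(plan_forward)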

Related

OpenACC constant parameters

I am wondering what is the proper way to handle constants in OpenACC kernels.
For example, in the following code
module vecaddmod
implicit none
integer, parameter :: n = 100000
!$acc declare create(n)
contains
subroutine vecaddgpu(r, a, b)
real, dimension(:) :: r, a, b
integer :: i
!$acc update self(n)
!$acc data present(n)
!$acc kernels loop copyin(a(1:n),b(1:n)) copyout(r(1:n))
do i = 1, n
r(i) = a(i) + b(i)
enddo
!$acc end data
end subroutine vecaddgpu
end module vecaddmod
program main
use vecaddmod
implicit none
integer :: i, errs, argcount
real, dimension(:), allocatable :: a, b, r, e
character*10 :: arg1
allocate( a(n), b(n), r(n), e(n) )
do i = 1, n
a(i) = i
b(i) = 1000*i
enddo
! compute on the GPU
call vecaddgpu( r, a, b )
! compute on the host to compare
do i = 1, n
e(i) = a(i) + b(i)
enddo
! compare results
errs = 0
do i = 1, n
if( r(i) /= e(i) )then
errs = errs + 1
endif
enddo
print *, errs, ' errors found'
if( errs ) call exit(errs)
end program main
n is declared as a constant on the CPU in a module, and it is used as the range of the loop. nvfortran warns me: "Constant or Parameter used in data clause". Is the above example the proper way to handle this? Can I take advantage of the constant memory on the GPU, so that I don't need to copy it from the CPU to the GPU for each kernel launch?
Thanks.
The compiler will replace parameters with their literal values, so there is no need to put them in data regions.
module vecaddmod
implicit none
integer, parameter :: n = 100000
contains
subroutine vecaddgpu(r, a, b)
real, dimension(:) :: r, a, b
integer :: i
!$acc kernels loop copyin(a(1:n),b(1:n)) copyout(r(1:n))
do i = 1, n
r(i) = a(i) + b(i)
enddo
end subroutine vecaddgpu
end module vecaddmod
...
% nvfortran -acc -Minfo=accel test.f90
vecaddgpu:
11, Generating copyin(a(:100000)) << "n" is replaced with 100000
Generating copyout(r(:100000))
Generating copyin(b(:100000))
12, Loop is parallelizable
Generating Tesla code
12, !$acc loop gang, vector(128) ! blockidx%x threadidx%x

Compiling a fortran90 file with different parameters each time

I have recently been working on a Fortran 90 program which calculates the time needed for, and the results of, some mathematical calculations. Here is the code:
program loops
use omp_lib
implicit none
integer, parameter :: N=729
integer, parameter :: reps=1000
real(kind=8), allocatable :: a(:,:), b(:,:), c(:)
integer :: jmax(N)
real(kind=8) :: start1,start2,end1,end2
integer :: r
allocate(a(N,N), b(N,N), c(N))
call init1()
start1 = omp_get_wtime()
do r = 1,reps
call loop1()
end do
end1 = omp_get_wtime()
call valid1();
print *, "Total time for ",reps," reps of loop 1 = ", end1-start1
call init2()
start2 = omp_get_wtime()
do r = 1,reps
call loop2()
end do
end2 = omp_get_wtime()
call valid2();
print *, "Total time for ",reps," reps of loop 2 = ", end2-start2
contains
subroutine init1()
implicit none
integer :: i,j
do i = 1,N
do j = 1,N
a(j,i) = 0.0
b(j,i) = 3.142*(i+j)
end do
end do
end subroutine init1
subroutine init2()
implicit none
integer :: i,j,expr
do i = 1,N
expr = mod(i,3*(i/30)+1)
if (expr == 0) then
jmax(i) = N
else
jmax(i) = 1
end if
c(i) = 0.0
end do
do i = 1,N
do j = 1,N
b(j,i) = dble(i*j+1)/dble(N*N)
end do
end do
end subroutine init2
subroutine loop1()
implicit none
integer :: i,j
!$OMP PARALLEL DO DEFAULT(NONE), PRIVATE(i,j), SHARED(a,b), SCHEDULE(type,chunksize)
do i = 1,N
do j = N,i,-1
a(j,i) = a(j,i) + cos(b(j,i))
end do
end do
!$OMP END PARALLEL DO
end subroutine loop1
subroutine loop2()
implicit none
integer :: i,j,k
real (kind=8) :: rN2
rN2 = 1.0 / dble (N*N)
!$OMP PARALLEL DO DEFAULT(NONE), PRIVATE(i,j,k), SHARED(rN2,c,b,jmax), SCHEDULE(type,chunksize)
do i = 1,N
do j = 1, jmax(i)
do k = 1,j
c(i) = c(i) + k * log(b(j,i)) *rN2
end do
end do
end do
!$OMP END PARALLEL DO
end subroutine loop2
subroutine valid1()
implicit none
integer :: i,j
real (kind=8) :: suma
suma= 0.0
do i = 1,N
do j = 1,N
suma = suma + a(j,i)
end do
end do
print *, "Loop 1 check: Sum of a is ", suma
end subroutine valid1
subroutine valid2()
implicit none
integer i
real (kind=8) sumc
sumc= 0.0
do i = 1,N
sumc = sumc + c(i)
end do
print *, "Loop 2 check: Sum of c is ", sumc
end subroutine valid2
end program loops
The lines of interest are !$OMP PARALLEL DO DEFAULT(NONE), PRIVATE(i,j), SHARED(a,b), SCHEDULE(type,chunksize) and !$OMP PARALLEL DO DEFAULT(NONE), PRIVATE(i,j,k), SHARED(rN2,c,b,jmax), SCHEDULE(type,chunksize).
I want to run the different schedule cases and compare their results, so I need to change the SCHEDULE(type,chunksize) part, using different schedule types and different chunksizes. For example, in this case the schedule type is static and the chunksize is 1.
Say I have the schedule types (static, a, b, c) and the chunksizes (1,2,3,4,5,6,7). As I am new to Fortran, I wonder whether it is possible to compile and run the code for all cases in one go, without having to change the parameters manually every time: it would compile and run to give the result of the first case, e.g. (static,1), then compile and run the file again with the parameters changed automatically to give the next result, for instance (static,2) ... (b,4), and so on.
I have heard that a script file can be used to perform such a task, but I am not sure what exactly I need to do for this.
Thank you so much.
You may want to investigate the use of the preprocessor. I'm speaking from experience with gfortran, but I believe this applies to (almost) all other compilers as well, even though it is outside the scope of the Fortran standard.
If you name your source file with a capital F in the suffix, i.e. file.F, file.F90, file.F95 etc, your file will be preprocessed with the C preprocessor before being compiled. That may sound complicated, but cutting this down to what you need, this means that if you compile your code with a command like
$ gfortran -DCHUNK_SIZE=1 mySource.F90
then all occurrences of CHUNK_SIZE (with qualifiers which are not essential to your problem) will be replaced by 1. More technically, CHUNK_SIZE becomes a macro defined to expand to 1. So if you replace SCHEDULE(type,chunksize) with SCHEDULE(type,CHUNK_SIZE) in your source file, you can repeatedly invoke the compiler with different values, -DCHUNK_SIZE=1, -DCHUNK_SIZE=2 etc, and get the result that you described. The same can be done for type.
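For illustration, here is a sketch of how loop1 from the question could look with the chunk size supplied by the preprocessor (the subroutine still sits in the program's contains section, so a, b and N come from the host program; static is just one example value, and the schedule type could be turned into a macro in the same way):
! Sketch: chunk size chosen at compile time, e.g.
!   gfortran -fopenmp -DCHUNK_SIZE=4 loops.F90
! (the capital-F suffix makes gfortran run the preprocessor)
subroutine loop1()
  implicit none
  integer :: i, j
  !$OMP PARALLEL DO DEFAULT(NONE), PRIVATE(i,j), SHARED(a,b), SCHEDULE(static,CHUNK_SIZE)
  do i = 1, N
    do j = N, i, -1
      a(j,i) = a(j,i) + cos(b(j,i))
    end do
  end do
  !$OMP END PARALLEL DO
end subroutine loop1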
Now you may want to change the function names accordingly as well. One way would be to add a few preprocessor statements near the top of your file declaring a few macros, namely
#ifdef __GFORTRAN__
#define PASTE2(a,b) a/**/b
#define FUNC_NAME_WITH_CHUNK_SIZE(fn) PASTE2(PASTE2(fn,_),CHUNK_SIZE)
#else
#define FUNC_NAME_WITH_CHUNK_SIZE(fn) fn ## _ ## CHUNK_SIZE
#endif
#define LOOP1 FUNC_NAME_WITH_CHUNK_SIZE(loop1)
#define LOOP2 FUNC_NAME_WITH_CHUNK_SIZE(loop2)
and replace loop1 with LOOP1 etc. You could do this from the command line as before, but since these rules are not supposed to change between compilations, it makes sense to keep these in the source file. I think the only part that is not self-explanatory is the use of ## and /**/ between #ifdef and #endif. This is how one does string concatenation with the preprocessor, and because gfortran uses the way C preprocessors did it before the language was standardized, it gets exceptional treatment, see e.g. this answer for some info on these operators. The purpose of this operation is to replace LOOP1 with loop1_<CHUNK_SIZE>, where <CHUNK_SIZE> is filled in from the command line. Feel free to follow any other conventions for naming these functions.
If you want to call these functions from another translation unit, you will of course have to process the function names in the same way. To make your life easier, you may want to research the #include statement. Detailing this would take us too far here, but the idea is that you put all your macro definitions into a file (conventionally named <something>.inc in the Fortran world, with <something> replaced by a name that makes sense to you) and use #include "<something>.inc" in all source files to obtain the same macro definitions.

Why does a large matrix pass through several subroutines as fast as a smaller matrix?

What exactly is happening to my matrix? How is Fortran handling it?
What's attached is a snippet of code inspired by a larger project that simulates light transport in eye tissue. It passes some large matrices through subroutines and then randomly puts values in them.
My Goal: To see how passing such a large matrix through several subroutines would have an impact on performance.
My Reference: The exact same code, except that the dimension of the matrix of interest is now [5,5] (it was previously [250,200]).
My Question: Why is there no significant difference in results?
MY RESULTS
MATRIX A_rz dimension [250,200]
real 0m6.661s
user 0m6.638s
sys 0m0.012s
MATRIX A_rz dimension [5,5]
real 0m6.508s
user 0m6.489s
sys 0m0.011s
bMatMOD.f90
module bMatMOD
implicit none
type :: INPUT
integer :: nLayers = 1
integer :: nPhotons = 50000000
real, dimension (2) :: dZR = (/0.0004, 0.001/)
integer, dimension(3) :: nZRA = (/250,200,30/)
real, dimension (1) :: d = (/0.03/)
end type INPUT
type :: OUTPUT
real, allocatable :: Rd_ra(:,:)
real, allocatable :: A_rz(:,:)
real, allocatable :: Tt_ra(:,:)
end type OUTPUT
contains
subroutine initOUTPUTS (in_INPUT,out_OUTPUT)
type (INPUT), intent (in) :: in_INPUT
type (OUTPUT),intent (out) :: out_OUTPUT
allocate (out_OUTPUT%A_rz(in_INPUT%nZRA(2),in_INPUT%nZRA(1)))
allocate (out_OUTPUT%Rd_ra(in_INPUT%nZRA(2),in_INPUT%nZRA(3)))
allocate (out_OUTPUT%Tt_ra(in_INPUT%nZRA(2),in_INPUT%nZRA(3)))
out_OUTPUT%A_rz = 0.0
out_OUTPUT%Rd_ra = 0.0
out_OUTPUT%Tt_ra = 0.0
return
end subroutine initOUTPUTS
end module bMatMOD
bMatRoutines.f90
subroutine A (o)
use bMatMOD
type (OUTPUT) :: o
real :: rnd1, rnd2
rnd1 = rand()
rnd2 = rand()
call B(o,rnd1,rnd2)
return
end subroutine A
subroutine B (o,x,y)
use bMatMOD
type (OUTPUT) :: o
real, intent (in) :: x
real, intent (in) :: y
integer, dimension(2) :: temp
integer :: i, j
temp = SHAPE(o%A_rz)
i = INT(temp(1)*y)
j = INT(temp(2)*x)
if ( i .eq. 0) then
i = 1
endif
if (i .eq. temp(1)) then
i = i - 1
endif
if (j .eq. 0) then
j = 1
endif
if (j .eq. temp(2)) then
j = j - 1
endif
o%A_rz(i,j) = o%A_rz(i,j) + x + y
return
end subroutine B
bMatmcml.f90
program bMatmcml
use bMatMOD
implicit none
type (INPUT) :: u
type (OUTPUT) :: o
integer :: i
call initOUTPUTS(u,o)
call srand(0)
do i = 1,u%nPhotons,1
call A(o)
enddo
end program bMatmcml
bMat.sh
rm -f *.o *~ *.exe
echo "MATRIX A_rz dimension [250,200]"
gfortran bMatMOD.f90 bMatRoutines.f90 bMatmcml.f90 -g -Wall -Werror -O3 -ffast-math -o bMat.exe
time ./bMat.exe
echo "MATRIX A_rz dimension [5,5]"
gfortran bMatMOD-v1.f90 bMatRoutines.f90 bMatmcml.f90 -g -Wall -Werror -O3 -ffast-math -o bMat-v1.exe
time ./bMat.exe

What is the proper assignment of variables (private and shared) in the parallelized do loop of the given subroutine GAUSSLEG?

I am new to OpenMP. I am trying to parallelize the do loop in subroutine GAUSSLEG. The variables XG, WG and NG are taken from module MATRIC. I am getting unexpected results, and I am confused about the proper assignment of variables (private and shared). Can somebody help me?
SUBROUTINE GAUSSLEG(f,a,b,s)
USE OMP_LIB
USE MATRIC , ONLY : XG ,WG , NG
IMPLICIT DOUBLE PRECISION(A-H,O-Z)
external f
xm = 0.5d0*(b+a)
xl = 0.5d0*(b-a)
s = 0.d0
!$omp parallel do reduction ( + : s) default(none)
!$omp private(j) shared(xm,xl,wg,xg,ng,dx)
do j=1,ng
dx = xl*xg(j)
s = s + wg(j)*(func(xm+dx)+func(xm-dx))
end do
!$omp end parallel do
s = xl*s/2.0
return
END
Hi, I have used the subroutine GAUSSLEG to calculate the integral of sin(x) from 0 to pi. I get the same result (2.5464790894) whether I make dx private or shared, but the exact result is 2.0. I have also tried putting xl*xg(j) in directly and removing dx, and still get the same result as above. Without the -openmp option at compilation, I get the exact result 2.0. This is the whole program.
MODULE MATRIC
IMPLICIT NONE
INTEGER , PARAMETER :: NG = 40
DOUBLE PRECISION , PARAMETER :: PI=2.0D0*ACOS(0.0D0)
DOUBLE PRECISION :: XG(60) , WG(60)
END MODULE MATRIC
program gauss
use matric, only : xg,wg,pi
implicit none
double precision :: x1,x2,a,b,ans
external :: f
x1 = -1.0d0 ; x2 = 1.0d0
a = 0.0 ; b = PI
call gauleg(x1,x2)
call gaussleg(f,a,b,ans)
write(*,*)ans
end program gauss
!function to be integrated
double precision function f(x)
implicit none
double precision, intent(in) :: x
f = sin(x)
end function f
SUBROUTINE GAUSSLEG(func,a,b,ss)
USE OMP_LIB
USE MATRIC , ONLY : XG ,WG , NG
double precision,intent(in) :: a , b
double precision,intent(out)::ss
double precision :: xm , xl , dx
integer :: j
double precision,external::func
xm = 0.5d0*(b+a)
xl = 0.5d0*(b-a)
ss = 0.d0
!$OMP PARALLEL DO REDUCTION( + : ss) default(none) &
!$OMP PRIVATE(j,dx) SHARED(xm,xl,xg,wg)
do j=1,ng
dx = xl*xg(j)
ss = ss + wg(j)*(func(xm+dx)+func(xm-dx))
end do
!$OMP END PARALLEL DO
ss = xl*ss/2.0
return
END
Your code includes a canonical data race. You have declared dx shared, then written
dx = xl*xg(j)
so that all threads can update the same, shared, variable, without any co-ordination. I think, but it is your responsibility to check this, that you can make dx private and have each thread look after its own value of the variable.
Incidentally, DO NOT USE implicit typing; you're just asking for trouble. Asking for trouble while you are trying to learn how to use OpenMP is just, well, asking for more trouble. USE implicit none. And don't respond "Oh, I'm just updating an existing codebase which uses implicit typing." If that's what you are doing, do it properly.
I got exact results in the following way.
SUBROUTINE QGAUSSP(func,a,b,ss)
USE OMP_LIB
USE MATRIC , ONLY : XG ,WG , NG
implicit none
double precision, intent(in) :: a , b
double precision, intent(out):: ss
double precision :: xm , xl , dx , xgd , wgd
double precision :: s(NG)
integer :: j,tid
double precision,external::func
xm = 0.5d0*(b+a)
xl = 0.5d0*(b-a)
ss = 0.d0
!$omp parallel do private(j,xgd,wgd,dx) shared(xm,xl,xg,wg,s) num_threads(15)
do j=1,ng
xgd=xg(j)
wgd=wg(j)
dx = xl*xgd
s(j)=wgd*(func(xm+dx)+func(xm-dx))
end do
!$omp end parallel do
ss=sum(s) *xl/2.0
return
END

Writing a large matrix in a single file using MPI

I have a large N by N matrix containing real numbers, which has been decomposed into blocks using MPI. I am now trying to recompose this matrix and write it in a single file.
This topic (writing a matrix into a single txt file with mpi) covered a similar issue, but I got pretty confused by all the 'integer-to-string' conversion, etc (I am not an expert!). I am using Fortran for my code, but I guess that even a C explanation should help. I have been reading tutorials on MPI-IO, but there are still a few things I do not understand. Here is the code I have been working on:
use mpi
implicit none
! matrix dimensions
integer, parameter :: imax = 200
integer, parameter :: jmax = 100
! domain decomposition in each direction
integer, parameter :: iprocs = 3
integer, parameter :: jprocs = 3
! variables
integer :: i, j
integer, dimension(mpi_status_size) :: wstatus
integer :: ierr, proc_num, numprocs, fileno, localarray
integer :: loc_i, loc_j, ppp
integer :: istart, iend, jstart, jend
real, dimension(:,:), allocatable :: x
! initialize MPI
call mpi_init(ierr)
call mpi_comm_size(mpi_comm_world, numprocs, ierr)
call mpi_comm_rank(mpi_comm_world, proc_num, ierr)
! define the beginning and end of blocks
loc_j = proc_num/iprocs
loc_i = proc_num-loc_j*iprocs
ppp = (imax+iprocs-1)/iprocs
istart = loc_i*ppp + 1
iend = min((loc_i+1)*ppp, imax)
ppp = (jmax+jprocs-1)/jprocs
jstart = loc_j*ppp + 1
jend = min((loc_j+1)*ppp, jmax)
! write random data in each block
allocate(x(istart:iend,jstart:jend))
do j = jstart, jend
do i = istart, iend
x(i,j) = real(i + j)
enddo
enddo
! create subarrays
call mpi_type_create_subarray( 2, [imax,jmax], [iend-istart+1,jend-jstart+1], &
[istart,jstart], mpi_order_fortran, mpi_real, localarray, ierr )
call mpi_type_commit( localarray, ierr )
! write to file
call mpi_file_open( mpi_comm_world, 'test.dat', IOR(MPI_mode_create,MPI_mode_wronly), &
mpi_info_null, fileno, ierr )
call mpi_file_set_view( fileno, 0, mpi_real, localarray, "native", mpi_info_null, ierr )
call mpi_file_write_all( fileno, x, (jend-jstart+1)*(iend-istart+1), MPI_real, wstatus, ierr )
call mpi_file_close( fileno, ierr )
! deallocate data
deallocate(x)
! finalize MPI
call mpi_finalize(ierr)
I have been following this tutorial (PDF), but my compiler complains that there is no specific subroutine for the generic mpi_file_set_view. Did I do something wrong? Is the rest of the code ok?
Thank you very much for your help!!
Joachim
I would say that the easy way is to use a library designed to perform such operations efficiently: http://2decomp.org/mpiio.html
You can also look at their source code (files io.f90 and io_write_one.f90).
In the source code, you will see a call to MPI_FILE_SET_SIZE that may be relevant for your case.
EDIT: consider using call MPI_File_Set_View(fhandle, 0_MPI_OFFSET_KIND, ...), as suggested in the answer to MPI-IO: MPI_File_Set_View vs. MPI_File_Seek.
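Applied to the code in the question, the fix is to give the displacement argument the kind MPI_OFFSET_KIND; passing a plain default integer 0 is what makes the compiler report that there is no specific subroutine for the generic mpi_file_set_view. A sketch of the corrected call:
! The displacement argument of MPI_File_set_view must have kind MPI_OFFSET_KIND;
! a bare default-integer 0 does not match the generic interface from 'use mpi'.
call mpi_file_set_view( fileno, 0_MPI_OFFSET_KIND, mpi_real, localarray, &
                        "native", mpi_info_null, ierr )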
