I am using LAPACK to perform eigendecompositions of two relatively small matrices (15x15, in this case). These matrices come from an optimisation problem, but their origin is not of great importance. They are quite similar, since they correspond to a small step in an optimisation algorithm, so I would expect their eigenvalues and eigenvectors to be of similar magnitude, and certainly of the same sign. However, after obtaining the eigenvectors of both matrices with LAPACK's DSYEVD (I have also tried DSYEV, with the same result), I find that some eigenvectors show the expected similarity, while others have their signs completely inverted, which is rather strange. WeTransfer link for these matrices: https://wetransfer.com/downloads/784e73883eae7c63aa06aa70b64d462b20220831121100/b7519a.
I am using a maths module that contains two functions: one for the eigenvalues and one for the eigenvectors.
MODULE math
implicit none
! Kind parameters (assumed here so the module is self-contained; the original presumably defines them elsewhere).
integer, parameter :: i4b = selected_int_kind(9)
integer, parameter :: dp = kind(1.0d0)
contains
function EVALS(matrix, n) result(eigenvals)
! Here, the eigenvalues from a square, symmetric, real matrix are calculated.
! The eigenvectors associated with said eigenvalues are not given as a result, but can be obtained with the function EVEVS.
!
! ARGUMENTS: matrix : 2D array containing the matrix whose eigenvalues will be calculated.
!            n      : integer giving the number of rows/columns (it doesn't matter which, as the matrix is square).
implicit none
integer(i4b), intent(in) :: n
integer(i4b) :: LDA, LWORK, LIWORK, INFO
real(dp), intent(in) :: matrix(n, n)
real(dp), allocatable :: WORK(:)
integer(i4b), allocatable :: IWORK(:) ! DSYEVD expects an INTEGER workspace array here, not a real one.
real(dp) :: eigenvals(n), work_mat(n,n)
character :: JOBZ, UPLO
! Assigning working matrix...
work_mat(:,:) = 0.0
work_mat(:,:) = matrix(:,:)
! Initialising some values....
JOBZ = 'N'
UPLO = 'U'
LDA = n
INFO = 0
! Allocating working array...
LWORK = MAX(1, (1 + 6*n + 2*n**2))
LIWORK = MAX(1, (3 + 5*n))
allocate(WORK(LWORK))
allocate(IWORK(LIWORK))
WORK(:) = 0
IWORK(:) = 0
! Obtaining eigenvalues...
eigenvals(:) = 0.0
call DSYEVD(JOBZ, UPLO, n, work_mat, LDA, eigenvals, WORK, LWORK, IWORK, LIWORK, INFO)
deallocate(WORK, IWORK)
end function EVALS
function EVECS(matrix, n) result(eigenvecs)
! Here, the eigenvectors of a square, symmetric, real matrix are calculated.
! The eigenvalues associated with said eigenvectors are not given as a result, but can be obtained with the function EVALS.
!
! ARGUMENTS: matrix : 2D array containing the matrix whose eigenvectors will be calculated.
!            n      : integer giving the number of rows/columns (it doesn't matter which, as the matrix is square).
implicit none
integer(i4b), intent(in) :: n
integer(i4b) :: LDA, LWORK, LIWORK, INFO
real(dp), intent(in) :: matrix(n, n)
real(dp), allocatable :: WORK(:)
integer(i4b), allocatable :: IWORK(:) ! DSYEVD expects an INTEGER workspace array here, not a real one.
real(dp) :: eigenvecs(n, n), eigenvals(n), work_mat(n,n)
character :: JOBZ, UPLO
! Assigning working matrix...
work_mat(:,:) = 0.0
work_mat(:,:) = matrix(:,:)
! Initialising some values....
JOBZ = 'V'
UPLO = 'U'
LDA = n
INFO = 0
! Allocating working array....
LWORK = MAX(1, (1 + 6*n + 2*n**2))
LIWORK = MAX(1, (3 + 5*n))
allocate(WORK(LWORK))
allocate(IWORK(LIWORK))
WORK(:) = 0
IWORK(:) = 0
! Obtaining eigenvectors...
eigenvals(:) = 0.0
call DSYEVD(JOBZ, UPLO, n, work_mat, LDA, eigenvals, WORK, LWORK, IWORK, LIWORK, INFO)
eigenvecs(:,:) = 0.0
eigenvecs(:,:) = work_mat(:,:)
deallocate(WORK, IWORK)
end function EVECS
END MODULE math
Now, below is a minimal example of how these functions are used from another source file. I have printed two eigenvectors (one from each matrix) which are very similar in their values, but with opposite signs. If you look at other corresponding eigenvectors, some are more or less similar (as expected), and others show this same opposite-sign behaviour.
program eigen_test
use math
implicit none
integer(i4b) :: i, j
integer(i4b), parameter :: npr=15
real(dp) :: mat_a(npr,npr), mat_b(npr,npr)
real(dp) :: eigenvals_a(npr), eigenvals_b(npr)
real(dp) :: eigenvecs_a(npr,npr), eigenvecs_b(npr,npr)
open(10, file="mat_a")
read(10,*) ((mat_a(i,j), j=1,npr), i=1,npr)
open(11, file="mat_b")
read(11,*) ((mat_b(i,j), j=1,npr), i=1,npr)
eigenvals_a = EVALS(mat_a, npr)
eigenvecs_a = EVECS(mat_a, npr)
eigenvals_b = EVALS(mat_b, npr)
eigenvecs_b = EVECS(mat_b, npr)
print *, eigenvecs_a(:,4)
print *, eigenvecs_b(:,4)
end program eigen_test
Perhaps this is a problem with my understanding of eigendecomposition, or maybe I am not using the LAPACK routines correctly, but I hope the problem is clear and reproducible.
Thanks in advance!
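(A note for anyone comparing such output: an eigenvector is only determined up to a scalar factor, so v and -v are equally valid results from an eigensolver, and comparisons between runs should be made sign-insensitive. A minimal sketch of such a comparison, with illustrative names, assuming both vectors are unit-normalised:)
program sign_check
implicit none
integer, parameter :: dp = kind(1.0d0)
real(dp) :: va(3), vb(3)
va = [0.3_dp, -0.5_dp, 0.8_dp]
vb = -va ! same eigenvector, opposite sign
if (dot_product(va, vb) < 0.0_dp) vb = -vb ! align signs before comparing
print *, maxval(abs(va - vb)) ! 0.0 once the signs are aligned
end program sign_check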
I am evaluating the overhead cost (in wall-clock time) of some features in Fortran programs, and I came across the following behaviour with GNU Fortran that I did not expect: having the subroutine in the same file as the main program (in the contains section, or in a module in the same file) versus having the subroutine in a separate module (in a separate file) has a big impact.
The simple code that reproduces the behaviour is given below. I have a subroutine that does a matrix-vector multiplication 250000 times. In the first test, the subroutine is in the contains section of the main program; in the second test, the same subroutine is in a separate module. The difference in performance between the two is big.
With the subroutine in the contains section of the main program, 10 runs yield
min: 1.249
avg: 1.266
1.275 - 1.249 - 1.264 - 1.279 - 1.266 - 1.253 - 1.271 - 1.251 - 1.269 - 1.284
With the subroutine in a separate module, 10 runs yield
min: 1.848
avg: 1.861
1.848 - 1.862 - 1.853 - 1.871 - 1.854 - 1.883 - 1.810 - 1.860 - 1.886 - 1.884
About 50% slower; this factor seems consistent across different matrix sizes and iteration counts.
These tests were done with gfortran 4.8.5. With gfortran 8.3.0, the program runs a little faster, but the time still doubles between the subroutine in the contains section of the main program and the subroutine in a separate module.
The Portland Group compiler does not have this problem with my test program, and it runs even faster than the best gfortran case.
If I read the size of the matrix from an input file (or a runtime command-line argument) and do dynamic allocation, then the difference in wall-clock time goes away, and both cases run slower (at the wall-clock time of the separate-module case). I suspect that gfortran is able to optimize the main program better if the size of the matrix is known at compile time in the main program.
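For reference, the dynamic variant I tested replaces the fixed-size declarations with something like the following sketch (the rest of the program is the same as test_simple below):
program test_dynamic
use iso_fortran_env
implicit none
integer :: N
real(real64), allocatable :: A(:,:), x(:), y(:)
read(*,*) N ! size known only at run time
allocate(A(N,N), x(N), y(N))
! ... same make_matrix / axpy_loop calls and timing as in test_simple ...
end program test_dynamic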
What am I doing wrong that the GNU compilers do not like, or what is the GNU compiler doing poorly? Are there compile flags to help gfortran in such cases?
Everything is compiled with optimization -O3
Code (test_simple.f90)
!< #file test_simple.f90
!! simple test
!>
!
program test_simple
!
use iso_fortran_env
use test_mod
!
implicit none
!
integer, parameter :: N = 100
integer, parameter :: N_TEST = 250000
logical, parameter :: GENERATE=.false.
!
real(real64), parameter :: dx = 10.0_real64
real(real64), parameter :: lx = 40.0_real64
!
real(real64), dimension(N,N) :: A
real(real64), dimension(N) :: x, y
real(real64) :: start_time, end_time
real(real64) :: duration
!
integer :: k, loop_idx
!
call make_matrix(A,dx,lx)
x = A(N/2,:)
!
y = 0
call cpu_time( start_time )
call axpy_loop (A, x, y, N_TEST)
!call axpy_loop_in (A, x, y, N_TEST)
!
call cpu_time( end_time )
!
duration = end_time-start_time
!
if( duration < 0.01 )then
write( *, "('Total time:',f10.6)" ) duration
else
write( *, "('Total time:',f10.3)" ) duration
end if
!
write(*,"('Sum = ',ES14.5E3)") sum(y)
!
contains
!
!< #brief compute y = A^nloop x
!! #param[in] A matrix to use
!! #param[in] x vector to use
!! #param[in, out] y output
!! #param[in] nloop number of iterations, power to apply to A
!!
!>
subroutine axpy_loop_in (A, x, y, nloop)
real(real64), dimension(:,:), intent(in) :: A
real(real64), dimension(:), intent(in) :: x
real(real64), dimension(:), intent(inout) :: y
integer, intent(in) :: nloop
!
real(real64), dimension(size(x)) :: z
integer :: k, iter
!
y = x
do iter = 1, nloop
z = y
y = 0
do k = 1, size(A,2)
y = y + A(:,k)*z(k)
end do
end do
!
end subroutine axpy_loop_in
!
!> #brief Computes the square exponential correlation kernel matrix for
!! a 1D uniform grid, using coordinate vector and scalar parameters
!! #param [in, out] C square matrix of correlation (kernel)
!! #param [in] dx grid spacing
!! #param [in] lx decorrelation length
!!
!! The correlation between the grid points i and j is given by
!! \f$ C(i,j) = \exp( -(x_i - x_j)^2 / l_x^2 ) \f$
!! where x_i and x_j are respectively the coordinates of points i and j
!>
subroutine make_matrix(C, dx, lx)
! some definitions of the square correlation
! uses 2l^2 while other use l^2
! l^2 is used here by setting this factor to 1.
real(real64), parameter :: factor = 1.0
!
real(real64), dimension(:,:), intent(in out) :: C
real(real64), intent(in) :: dx
real(real64), intent(in) :: lx
! Local variables
real(real64), dimension(size(C,2)) :: nfacts ! one normalization factor per column
real(real64) :: dist, denom
integer :: ii, jj
integer :: ii, jj
!
do jj=1, size(C,2)
do ii=1, size(C,1)
dist = (ii-jj)*dx
denom = factor*lx*lx
C(ii, jj) = exp( -dist*dist/denom )
end do
! compute normalization factors
nfacts(jj) = sqrt( sum( C(:, jj) ) )
end do
!
! normalize to prevent arbitrary growth in those tests
! where we apply the exponential of the matrix
do jj=1, size(C,2)
do ii=1, size(C,1)
C(ii, jj) = C(ii, jj)/( nfacts(ii)*nfacts(jj) )
end do
end do
! remove the very small
where( C<epsilon(1.) ) C=0.
!
end subroutine make_matrix
!
end program test_simple
!
Code (test_mod.f90)
!> #file test_mod.f90
!! simple operations
!<
!< #brief module for simple operations
!!
!>
module test_mod
use iso_fortran_env
implicit none
contains
!
!< #brief compute y = A^nloop x
!! #param[in] A matrix to use
!! #param[in] x vector to use
!! #param[in, out] y output
!! #param[in] nloop number of iterations, power to apply to A
!!
!>
subroutine axpy_loop( A, x, y, nloop )
real(real64), dimension(:,:), intent(in) :: A
real(real64), dimension(:), intent(in) :: x
real(real64), dimension(:), intent(inout) :: y
integer, intent(in) :: nloop
!
real(real64), dimension(size(x)) :: z
integer :: k, iter
!
y = x
do iter = 1, nloop
z = y
y = 0
do k = 1, size(A,2)
y = y + A(:,k)*z(k)
end do
end do
!
end subroutine axpy_loop
!
end module test_mod
compile as
gfortran -O3 -o simple test_mod.f90 test_simple.f90
run as
./simple
The combination of the flags -march=native and -flto is the solution to the problem, at least on my test machines. With those options, the program is fully optimized and there is no difference between having the subroutine in the same file as the main program or in a separate file (separate module). In addition, the runtime is comparable to the runtime with the Portland Group compiler. Either option alone did not solve the problem: -march=native alone speeds up the contained version but makes the module version worse.
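For reference, the full compile line that fixed it on my machines:
gfortran -O3 -march=native -flto -o simple test_mod.f90 test_simple.f90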
My biased thinking is that the option -march=native should be the default; users doing something else are experienced and know what they are doing, so they can add the appropriate option or disable the default, whereas the common user will not easily think of it.
Thank you for all the comments.
I want to perform a matrix-vector product in Fortran using the SGEMV subroutine from BLAS.
I have a code that is similar to this:
program test
integer, parameter :: DP = selected_real_kind(15)
real(kind=DP), dimension (3,3) :: A
real(kind=DP), dimension (3) :: XP,YP
call sgemv(A,XP,YP)
end program test
A is a 3x3 matrix; XP and YP are vectors.
In the included module one can see the following code:
PURE SUBROUTINE SGEMV_F95(A,X,Y,ALPHA,BETA,TRANS)
! Fortran77 call:
! SGEMV(TRANS,M,N,ALPHA,A,LDA,X,INCX,BETA,Y,INCY)
USE F95_PRECISION, ONLY: WP => SP
REAL(WP), INTENT(IN), OPTIONAL :: ALPHA
REAL(WP), INTENT(IN), OPTIONAL :: BETA
CHARACTER(LEN=1), INTENT(IN), OPTIONAL :: TRANS
REAL(WP), INTENT(IN) :: A(:,:)
REAL(WP), INTENT(IN) :: X(:)
REAL(WP), INTENT(INOUT) :: Y(:)
END SUBROUTINE SGEMV_F95
I understand that some of the parameters are optional, so where am I wrong in the call?
When you look at BLAS or LAPACK routines, you should always have a look at the first letter:
S: single precision
D: double precision
C: single precision complex
Z: double precision complex
You defined your matrix A as well as the vectors XP and YP as double precision, using the statement:
integer, parameter :: DP = selected_real_kind(15)
So for this you need to use dgemv, or define your variables as single precision.
There is also a difference between calling dgemv and dgemv_f95: dgemv_f95 is part of Intel MKL and not really a common naming. For portability reasons, I would not use that notation but stick to the classic dgemv, which is also part of Intel MKL.
DGEMV performs one of the matrix-vector operations
y := alpha*A*x + beta*y, or y := alpha*A**T*x + beta*y,
where alpha and beta are scalars, x and y are vectors and A is an
m by n matrix.
If you want to know how to call the function, I suggest having a look here, but in the end it should look something like this:
call DGEMV('N',3,3,ALPHA,A,3,XP,1,BETA,YP,1)
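A minimal self-contained sketch of the corrected call (assuming a BLAS library is linked; ALPHA=1 and BETA=0 give a plain y = A*x, and the matrix values are illustrative):
program test_dgemv
implicit none
integer, parameter :: DP = selected_real_kind(15)
real(kind=DP) :: A(3,3), XP(3), YP(3)
A = reshape([1.0_DP, 0.0_DP, 0.0_DP, &
             0.0_DP, 2.0_DP, 0.0_DP, &
             0.0_DP, 0.0_DP, 3.0_DP], [3,3])
XP = [1.0_DP, 1.0_DP, 1.0_DP]
YP = 0.0_DP
call DGEMV('N', 3, 3, 1.0_DP, A, 3, XP, 1, 0.0_DP, YP, 1) ! y := alpha*A*x + beta*y
print *, YP ! expected: 1.0 2.0 3.0
end program test_dgemv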
The precisions are incompatible: you are calling sgemv, which takes single precision arguments, but you are passing double precision matrices and vectors.
Perhaps the trans parameter is required?
trans: Must be 'N', 'C', or 'T'.
(As per the note at the bottom of Developer Reference for Intel® Math Kernel Library - Fortran.)
I am trying to draw random deviates from a multinormal distribution.
There are two subroutines I found online, "setgmn" and "genmn", which are supposed to do exactly that. They are from the "ranlib" library.
http://www.netlib.org/random/
However, I have no clue how those two subroutines are supposed to work together.
setgmn combines the data in such a way that genmn can use it. However, I struggle with the "parm" argument: I am not exactly sure what I have to pass into it, nor how those two subroutines need to be combined to actually work.
I could not find an example online, so I thought maybe someone here has already used them or has a link to an example.
From the source:
First you have to call setgmn. This procedure will set the argument parm that you then have to pass to genmn.
subroutine setgmn ( meanv, covm, p, parm )
integer ( kind = 4 ) p
real ( kind = 4 ) covm(p,p)
real ( kind = 4 ) meanv(p)
real ( kind = 4 ) parm(p*(p+3)/2+1)
Parameters:
MEANV: the means of the multivariate normal distribution.
COVM: On input, the covariance matrix of the multivariate distribution. On output, the information in COVM has been overwritten.
P: the number of dimensions.
PARM: parameters needed to generate multivariate normal deviates.
PARM(1) contains the size of the deviates, P
PARM(2:P+1) contains the mean vector.
PARM(P+2:P*(P+3)/2+1) contains the upper half of the Cholesky decomposition of the covariance matrix.
subroutine genmn ( parm, x, work )
real ( kind = 4 ) parm(*)
real ( kind = 4 ) work(*)
real ( kind = 4 ) x(*)
Parameters:
PARM(P*(P+3)/2+1) : parameters set by SETGMN.
X(P): a random deviate from the distribution. Will be the output.
WORK(P): workspace
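Putting the two together, a minimal sketch (dimensions and values are illustrative; depending on your copy of ranlib you may also need to seed the underlying generator first, e.g. with its setall routine):
program mvn_example
implicit none
integer ( kind = 4 ), parameter :: p = 2
real ( kind = 4 ) :: meanv(p), covm(p,p)
real ( kind = 4 ) :: parm(p*(p+3)/2+1), x(p), work(p)
integer :: i
meanv = (/ 0.0, 1.0 /)
covm = reshape( (/ 1.0, 0.5, 0.5, 1.0 /), (/ p, p /) ) ! covariance matrix
call setgmn ( meanv, covm, p, parm ) ! note: covm is overwritten here
do i = 1, 5
call genmn ( parm, x, work ) ! x receives one multivariate normal deviate
print *, x
end do
end program mvn_example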
Here is an alternative, modern implementation of what you need, which works with the Cholesky factorization of the MVN covariance matrix instead of the covariance matrix itself, followed by an example of how to use it:
module RandMVN_mod
implicit none
contains
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! Returns a multivariate normal random number.
function getRandMVN(nd,MeanVec,CholeskyLower,Diagonal) result(RandMVN)
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), intent(in) :: nd ! dimension of the MVN
real(RK) , intent(in) :: MeanVec(nd) ! The Mean vector of the MVN from which points are drawn
real(RK) , intent(in) :: CholeskyLower(nd,nd) ! Lower Triangle of the Cholesky Factorization of the covariance matrix of the MVN
real(RK) , intent(in) :: Diagonal(nd) ! Diagonal terms of the Cholesky Factorization of the covariance matrix of the MVN
real(RK) :: RandMVN(nd), dummy
integer(IK) :: j,i
RandMVN = 0._RK
do j = 1,nd
dummy = getRandGaus()
RandMVN(j) = RandMVN(j) + Diagonal(j) * dummy
do i = j+1,nd
RandMVN(i) = RandMVN(i) + CholeskyLower(i,j) * dummy
end do
end do
RandMVN = RandMVN + MeanVec
end function getRandMVN
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! returns a normally distributed random number with zero mean and unit variance.
function getRandGaus()
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), save :: iset=0
real(RK) , save :: gset
real(RK) :: fac,rsq,vec(2)
real(RK) :: getRandGaus
if (iset == 0) then
do
call random_number(vec)
vec = 2._RK*vec - 1._RK
rsq = vec(1)**2 + vec(2)**2
if ( rsq > 0._RK .and. rsq < 1._RK ) exit
end do
fac = sqrt(-2._RK*log(rsq)/rsq)
gset = vec(1)*fac
getRandGaus = vec(2)*fac
iset = 1
else
getRandGaus = gset
iset = 0
endif
end function getRandGaus
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! Returns the Cholesky factorization of the input positive definite matrix PosDefMat
subroutine getCholeskyFactor(nd,PosDefMat,Diagonal)
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), intent(in) :: nd
real(RK) , intent(inout) :: PosDefMat(nd,nd) ! Upper triangle + diagonal is input matrix, lower is output.
real(RK) , intent(out) :: Diagonal(nd)
real(RK) :: summ
integer(IK) :: i
do i=1,nd
summ = PosDefMat(i,i) - dot_product(PosDefMat(i,1:i-1),PosDefMat(i,1:i-1))
if (summ <= 0._RK) then
error stop "getCholeskyFactor: input matrix is not positive definite."
end if
Diagonal(i) = sqrt(summ)
PosDefMat(i+1:nd,i) = ( PosDefMat(i,i+1:nd) - matmul(PosDefMat(i+1:nd,1:i-1),PosDefMat(i,1:i-1)) ) / Diagonal(i)
end do
end subroutine getCholeskyFactor
!***********************************************************************************************************************************
!***********************************************************************************************************************************
end module RandMVN_mod
program test_RandMVN
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
use RandMVN_mod
implicit none
integer(IK) :: isample
integer(IK) , parameter :: nd = 2 ! dimension of the Multivariate distribution (MVN)
integer(IK) , parameter :: nsample = 100 ! count of random numbers
real(RK) , parameter :: CovMat(nd,nd) = reshape([ 1._RK , 0.5_RK , 0.5_RK , 1._RK ], shape=shape(CovMat)) ! covariance matrix of MVN
real(RK) , parameter :: MeanVec(nd) = [ 1._RK , 1._RK ] ! mean vector of the MVN
real(RK) :: CholeskyLower(nd,nd), Diagonal(nd)
! get the Cholesky factorization of the Covariance Matrix
CholeskyLower = CovMat
call getCholeskyFactor(nd,CholeskyLower,Diagonal)
! get random MVN vectors form the MVN distribution
do isample = 1,nsample
write(*,"(*(g0,' '))") getRandMVN(nd,MeanVec,CholeskyLower,Diagonal), " ;"
end do
end program test_RandMVN
If you need many random vectors from the same MVN distribution with the same covariance matrix, then passing the Cholesky factorization to the procedure (as done above) makes it far more efficient, since it does not need to recompute the factorization from the input covariance matrix on every call.
I wrote Fortran code that calculates the ith permutation of a given list {1,2,3,...,n} without computing all the others, of which there are n!. I needed this in order to find the ith path of the TSP (Travelling Salesman Problem).
When n! is big, the code gives me errors, and I verified that the ith permutation found is not the exact one. For n=10 there are no problems at all, but for n=20 the code crashes or wrong values are found. I think this is due to errors Fortran makes when operating with big numbers (sums of big numbers).
I use Visual Fortran Ultimate 2013. Below you find the subroutines I use for this goal. WeightAdjMatRete is the distance matrix between each pair of nodes of the network.
! Factorial
RECURSIVE FUNCTION factorial(n) RESULT(n_factorial)
IMPLICIT NONE
REAL, INTENT(IN) :: n
REAL :: n_factorial
IF(n>0) THEN
n_factorial=n*factorial(n-1)
ELSE
n_factorial=1.
ENDIF
ENDFUNCTION factorial
! ith permutation of a list
SUBROUTINE ith_permutazione(lista_iniziale,n,i,ith_permutation)
IMPLICIT NONE
INTEGER :: k,n
REAL :: j,f
REAL, INTENT(IN) :: i
INTEGER, DIMENSION(1:n), INTENT(IN) :: lista_iniziale
INTEGER, DIMENSION(1:n) :: lista_lavoro
INTEGER, DIMENSION(1:n), INTENT(OUT) :: ith_permutation
lista_lavoro=lista_iniziale
j=i
DO k=1,n
f=factorial(REAL(n-k))
ith_permutation(k)=lista_lavoro(FLOOR(j/f)+1)
lista_lavoro=PACK(lista_lavoro,MASK=lista_lavoro/=ith_permutation(k))
j=MOD(j,f)
ENDDO
ENDSUBROUTINE ith_permutazione
! Modulo function, adapted
PURE FUNCTION mood(k,modulo) RESULT(ris)
IMPLICIT NONE
INTEGER, INTENT(IN) :: k,modulo
INTEGER :: ris
IF(MOD(k,modulo)/=0) THEN
ris=MOD(k,modulo)
ELSE
ris=modulo
ENDIF
ENDFUNCTION mood
! Quotient function, adapted
PURE FUNCTION quoziente(a,p) RESULT(ris)
IMPLICIT NONE
INTEGER, INTENT(IN) :: a,p
INTEGER :: ris
IF(MOD(a,p)/=0) THEN
ris=(a/p)+1
ELSE
ris=a/p
ENDIF
ENDFUNCTION quoziente
! Vectors containing all the payoffs perceived by the agents at the current state vector and at each of its single permutations
SUBROUTINE tuttipayoff(n,m,nodi,nodi_rete,sigma,bvector,MatVecSomma,VecPos,lista_iniziale,ith_permutation,lunghezze_percorso,WeightAdjMatRete,array_perceived_payoff_old,array_perceived_payoff_neg)
IMPLICIT NONE
INTEGER, INTENT(IN) :: n,m,nodi,nodi_rete
INTEGER, DIMENSION(1:nodi), INTENT(IN) :: sigma
INTEGER, DIMENSION(1:nodi), INTENT(OUT) :: bvector
REAL, DIMENSION(1:m,1:n), INTENT(OUT) :: MatVecSomma
REAL, DIMENSION(1:m), INTENT(OUT) :: VecPos
INTEGER, DIMENSION(1:nodi_rete), INTENT(IN) :: lista_iniziale
INTEGER, DIMENSION(1:nodi_rete), INTENT(OUT) :: ith_permutation
REAL, DIMENSION(1:nodi_rete), INTENT(OUT) :: lunghezze_percorso
REAL, DIMENSION(1:nodi_rete,1:nodi_rete), INTENT(IN) :: WeightAdjMatRete
REAL, DIMENSION(1:nodi), INTENT(OUT) :: array_perceived_payoff_old,array_perceived_payoff_neg
INTEGER :: i,j,k
bvector=sigma
FORALL(i=1:nodi,bvector(i)==-1)
bvector(i)=0
ENDFORALL
FORALL(i=1:m,j=1:n)
MatVecSomma(i,j)=bvector(m*(j-1)+i)*(2.**REAL(n-j))
ENDFORALL
FORALL(i=1:m)
VecPos(i)=1.+SUM(MatVecSomma(i,:))
ENDFORALL
DO k=1,nodi
IF(VecPos(mood(k,m))<=factorial(REAL(nodi_rete))) THEN
CALL ith_permutazione(lista_iniziale,nodi_rete,VecPos(mood(k,m))-1.,ith_permutation)
FORALL(i=1:(nodi_rete-1))
lunghezze_percorso(i)=WeightAdjMatRete(ith_permutation(i),ith_permutation(i+1))
ENDFORALL
lunghezze_percorso(nodi_rete)=WeightAdjMatRete(ith_permutation(nodi_rete),ith_permutation(1))
array_perceived_payoff_old(k)=(1./SUM(lunghezze_percorso))
ELSE
array_perceived_payoff_old(k)=0.
ENDIF
IF(VecPos(mood(k,m))-SIGN(1,sigma(m*(quoziente(k,m)-1)+mood(k,m)))*2**(n-quoziente(k,m))<=factorial(REAL(nodi_rete))) THEN
CALL ith_permutazione(lista_iniziale,nodi_rete,VecPos(mood(k,m))-SIGN(1,sigma(m*(quoziente(k,m)-1)+mood(k,m)))*2**(n-quoziente(k,m))-1.,ith_permutation)
FORALL(i=1:(nodi_rete-1))
lunghezze_percorso(i)=WeightAdjMatRete(ith_permutation(i),ith_permutation(i+1))
ENDFORALL
lunghezze_percorso(nodi_rete)=WeightAdjMatRete(ith_permutation(nodi_rete),ith_permutation(1))
array_perceived_payoff_neg(k)=(1./SUM(lunghezze_percorso))
ELSE
array_perceived_payoff_neg(k)=0.
ENDIF
ENDDO
ENDSUBROUTINE tuttipayoff
Don't use floating-point numbers to represent factorials; factorials are products of integers and are therefore best represented as integers.
Factorials grow big fast, so it may be tempting to use reals, because reals can represent huge numbers like 1.0e+30. But floating-point numbers are precise only in relation to their magnitude; their mantissa still has a limited size, and they can be huge only because their exponents may be huge.
A 32-bit real can represent exact integers up to about 16 million. After that, only every even integer can be represented up to 32 million, and every fourth integer up to 64 million. 64-bit reals are better: they can represent exact integers up to about 9 quadrillion (2^53).
64-bit integers can go 1024 times further: they can represent integers up to 2^63, or about 9 quintillion (9e+18). That is enough to represent 20!:
20! = 2,432,902,008,176,640,000
2^63 = 9,223,372,036,854,775,808
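You can see the 32-bit real limit directly:
program real_limit
implicit none
real :: r
r = 16777216.0        ! 2**24: beyond this, not every integer has a 32-bit real representation
print *, r + 1.0 == r ! prints T: 16777217 rounds back to 16777216
end program real_limit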
Fortran allows you to select a kind of integer based on the number of decimal digits it must be able to represent:
integer, parameter :: long = selected_int_kind(18)
Use this kind to do your calculations with 64-bit integers. This will give you factorials up to 20!. It won't go further than that, though: most machines support only integers up to 64 bits, so selected_int_kind(19) will return an invalid (negative) kind value, and using it will give you a compile-time error.
Here's the permutation part of your program with 64-bit integers. Note how all the type conversions, floors, and ceilings disappear.
program permute
implicit none
integer, parameter :: long = selected_int_kind(18)
integer, parameter :: n = 20
integer, dimension(1:n) :: orig
integer, dimension(1:n) :: perm
integer(kind=long) :: k
do k = 1, n
orig(k) = k
end do
do k = 0, 2000000000000000_long, 100000000000000_long
call ith_perm(perm, orig, n, k)
print *, k
print *, perm
print *
end do
end program
function fact(n)
implicit none
integer, parameter :: long = selected_int_kind(18)
integer(kind=long) :: fact
integer, intent(in) :: n
integer :: i
fact = 1
i = n
do while (i > 1)
fact = fact * i
i = i - 1
end do
end function fact
subroutine ith_perm(perm, orig, n, i)
implicit none
integer, parameter :: long = selected_int_kind(18)
integer, intent(in) :: n
integer(kind=long), intent(in) :: i
integer, dimension(1:n), intent(in) :: orig
integer, dimension(1:n), intent(out) :: perm
integer, dimension(1:n) :: work
integer :: k
integer(kind=long) :: f, j
integer(kind=long) :: fact
work = orig
j = i
do k = 1, n
f = fact(n - k)
perm(k) = work(j / f + 1)
work = pack(work, work /= perm(k))
j = mod(j, f)
end do
end subroutine ith_perm