I faced a problem where multiplying a non-null matrix by an identity matrix via LAPACK gives me a null matrix. All matrices have positive elements.
Dimensions of the matrices:
W is M1 x N1
D is N1 x N1
M is M1 x M1
D and M are identity matrices.
What I am trying to compute is the product D*W'*M, where W' is the transpose of W. Here is the Fortran code, using the DGEMM operation:
PROGRAM SIRT
DOUBLE PRECISION,ALLOCATABLE,DIMENSION(:,:) :: W, D, M, C, MAIN
DOUBLE PRECISION,ALLOCATABLE,DIMENSION(:) :: b, x
DOUBLE PRECISION :: pho
INTEGER, PARAMETER :: M1 = 27000
INTEGER, PARAMETER :: N1 = 1000
INTEGER, PARAMETER :: num_iterations = 200
INTEGER :: i, j, k
allocate(W(1:M1,1:N1))
allocate(D(1:N1,1:N1))
allocate(M(1:M1,1:M1))
allocate(C(1:N1,1:M1))
allocate(MAIN(1:N1,1:M1))
allocate(b(1:M1))
allocate(x(1:N1))
D = 0
M = 0
DO i=1,N1
D(i,i) = 1
END DO
DO i=1,M1
M(i,i) = 1
END DO
OPEN(UNIT=11, FILE="Wmpi.txt")
DO i = 1,M1
READ(11,*) (W(i,j),j=1,N1)
END DO
print *,ANY(W>0)
CLOSE (11, STATUS='KEEP')
OPEN(UNIT=11, FILE="bmpi.txt")
DO i = 1,M1
READ(11,*) b(i)
END DO
CLOSE (11, STATUS='KEEP')
CALL DGEMM('N', 'T', N1, N1, N1, 1.0, D, N1, W, N1, 0.0, C, N1)
print *,ANY(C>0)
CALL DGEMM('N', 'N', N1, M1, M1, 1.0, C, N1, M, M1, 0.0, MAIN, N1)
print *,ANY(MAIN>0)
pho = DLANGE('F', N1, M1, C, N1, x)
END PROGRAM SIRT
The sequential prints give True, True, False. So the first multiplication works and I get a non-null matrix, but after the second all elements are 0.
I know that I don't need to multiply by identity matrices, but I want to figure out what the problem is if I do.
Another question: can I do this memory-efficiently, without the temporary matrices MAIN and C?
EDIT
By dumping the resulting matrix after the first multiplication, I figured out that all of its elements are null. I can't understand, then, why ANY(C>0) is True at the second stage.
Firstly, note that DGEMM is part of the BLAS library, not LAPACK - the latter is a higher-level library. You have a few problems with your calls to DGEMM:
Real constants have a kind, and you must provide the correct kind. In
particular, DGEMM expects what old-fashioned Fortran called
DOUBLE PRECISION. The constants you provided are default-kind reals.
This will cause errors, and I would strongly recommend providing a
kind for ALL real constants if the precision required by your program
is not the default precision.
Of the first three integer arguments, the first two are always the shape of
the result matrix. C is n1 x m1, hence the change to the first call to
DGEMM.
The leading dimension of a matrix is, 99.999% of the time, the first
dimension as allocated/declared - it has nothing to do with the
maths at all; it is purely so DGEMM can work out how the matrix is
laid out in memory. It was wrong for W in the first DGEMM.
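For reference, the argument convention these points refer to is the standard BLAS interface for DGEMM (nothing specific to this program):

! Computes C := alpha*op(A)*op(B) + beta*C, where op(X) is X ('N') or X**T ('T')
CALL DGEMM(TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC)
! M   - number of rows of op(A) and of C
! N   - number of columns of op(B) and of C
! K   - number of columns of op(A) = number of rows of op(B)
! LDA/LDB/LDC - first dimensions of A/B/C as declared in the caller
! ALPHA/BETA  - scalars, which must have the same precision as the matrices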
I would also suggest that, as a proper check of the results is fairly easy to do by use of the MATMUL intrinsic, you do that rather than the half-hearted one in the original, and avoid using unit matrices for tests unless you really want unit matrices, as the high symmetry and large number of zeros can easily mask errors.
Pulling this all together, cutting out the irrelevant parts and modifying your program to a slightly more modern form, I get
Program sirt
Integer, Parameter :: wp = Kind( 1.0d0 )
Real( wp ),Allocatable,Dimension(:,:) :: w, d, m, c, main, main_compare
Real( wp ),Allocatable,Dimension(:) :: b, x
! Change problem size to something manageable on my laptop
!!$ Integer, Parameter :: m1 = 27000
!!$ Integer, Parameter :: n1 = 1000
Integer, Parameter :: m1 = 2700
Integer, Parameter :: n1 = 100
!!$ Integer :: i
Allocate(w(1:m1,1:n1))
Allocate(d(1:n1,1:n1))
Allocate(m(1:m1,1:m1))
Allocate(c(1:n1,1:m1))
Allocate(main(1:n1,1:m1))
Allocate(main_compare(1:n1,1:m1))
Allocate(b(1:m1))
Allocate(x(1:n1))
d = 0.0_wp
m = 0.0_wp
! Don't trust unit matrices for tests, too much symmetry, too many zeros - use Random numbers
!!$ Do i=1,n1
!!$ d(i,i) = 1.0_wp
!!$ End Do
Call Random_number( d )
!!$
!!$ Do i=1,m1
!!$ m(i,i) = 1.0_wp
!!$ End Do
Call Random_number( m )
! Don't have the file - use random numbers
!!$ open(unit=11, file="wmpi.txt")
!!$ do i = 1,m1
!!$ read(11,*) (w(i,j),j=1,n1)
!!$ end do
!!$ print *,any(w>0)
!!$ close (11, status='keep')
Call random_Number( w )
! 1) Real constants have a kind, and you must provide the correct kind
! 2) Of the first three integer arguments the first two are always
! the shape of the result matrix. C is n1xm1, hence the change
! 3) The leading dimension of a matrix is 99.999% of the time
! the first dimension as allocated/declared - it has nothing
! to do with the maths at all. It was wrong for W in the
! first DGEMM
! 4) DGEMM is part of the BLAS library - not LAPACK
!!$ CALL DGEMM('N', 'T', N1, N1, N1, 1.0, D, N1, W, N1, 0.0, C, N1)
!!$ CALL DGEMM('N', 'N', N1, M1, M1, 1.0, C, N1, M, M1, 0.0, MAIN, N1)
Call dgemm('n', 't', n1, m1, n1, 1.0_wp, d, n1, w, m1, 0.0_wp, c, n1)
Call dgemm('n', 'n', n1, m1, m1, 1.0_wp, c, n1, m, m1, 0.0_wp, main, n1)
! 5) Don't do half-hearted checks on the results when proper checks are easy
main_compare = Matmul( Matmul( d, Transpose( w ) ), m )
Write( *, * ) 'Max error ', Maxval( Abs( main - main_compare ) )
! Check we haven't somehow managed all zeros in both matrices ...
Write( *, * ) main( 1:3, 1 )
Write( *, * ) main_compare( 1:3, 1 )
End Program sirt
ian@eris:~/work/stack$ gfortran-8 -std=f2008 -fcheck=all -Wall -Wextra -pedantic -O matmul.f90 -lblas
/usr/bin/ld: warning: libgfortran.so.4, needed by //usr/lib/x86_64-linux-gnu/libopenblas.so.0, may conflict with libgfortran.so.5
ian@eris:~/work/stack$ ./a.out
Max error 3.6379788070917130E-011
38576.055405987529 33186.640731082334 33818.909332709263
38576.055405987536 33186.640731082334 33818.909332709263
ian@eris:~/work/stack$ ./a.out
Max error 2.9103830456733704E-011
34303.739077708480 34227.623080598998 34987.143088270866
34303.739077708473 34227.623080598998 34987.143088270859
ian@eris:~/work/stack$ ./a.out
Max error 3.2741809263825417E-011
35968.603030053979 34778.110740682620 32732.657800858586
35968.603030053971 34778.110740682612 32732.657800858586
ian@eris:~/work/stack$ ./a.out
Max error 2.9103830456733704E-011
31575.076511213174 35879.913361891951 35278.030249048912
31575.076511213178 35879.913361891951 35278.030249048912
I am using LAPACK to perform some eigendecomposition on two relatively small matrices (15x15, in this case). These matrices come from an optimisation problem, but their origin is not of great importance. They are both quite similar, as they come from a small step in an optimisation algorithm, so I would expect their eigenvectors and eigenvalues to be of similar magnitude, and certainly of the same sign. However, after obtaining the eigenvectors of both matrices using LAPACK's DSYEVD (I have also tried DSYEV with the same result), I find that some eigenvectors show the expected similarity, but others have a total inversion of their signs, which is rather strange. WeTransfer link for these matrices: https://wetransfer.com/downloads/784e73883eae7c63aa06aa70b64d462b20220831121100/b7519a.
I am using a module for maths that contains two functions, one for eigenvalues and one for eigenvectors.
MODULE math
implicit none
! Kind parameters (assumed definitions; the original presumably imports these from elsewhere)
integer, parameter :: dp = kind(1.0d0)
integer, parameter :: i4b = selected_int_kind(9)
contains
function EVALS(matrix, n) result(eigenvals)
! Here, the eigenvalues from a square, symmetric, real matrix are calculated.
! The eigenvectors associated with said eigenvalues are not given as a result, but can be obtained with the function EVECS.
!
! ARGUMENTS: matrix : 2D array containing the matrix whose eigenvalues will be calculated.
! n : integer which represents the number of rows/columns (it doesn't matter which as the matrix is square).
implicit none
integer(i4b), intent(in) :: n
integer(i4b) :: LDA, LWORK, LIWORK, INFO
real(dp), intent(in) :: matrix(n, n)
real(dp), allocatable :: WORK(:)
integer(i4b), allocatable :: IWORK(:) ! DSYEVD requires an INTEGER workspace array
real(dp) :: eigenvals(n), work_mat(n,n)
character :: JOBZ, UPLO
! Assigning working matrix...
work_mat(:,:) = 0.0
work_mat(:,:) = matrix(:,:)
! Initialising some values....
JOBZ = 'N'
UPLO = 'U'
LDA = n
INFO = 0
! Allocating working array...
LWORK = MAX(1, (1 + 6*n + 2*n**2))
LIWORK = MAX(1, (3 + 5*n))
allocate(WORK(LWORK))
allocate(IWORK(LIWORK))
WORK(:) = 0
IWORK(:) = 0
! Obtaining eigenvalues...
eigenvals(:) = 0.0
call DSYEVD(JOBZ, UPLO, n, work_mat, LDA, eigenvals, WORK, LWORK, IWORK, LIWORK, INFO)
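! (Note: INFO is not checked after the call; a robust version would test INFO /= 0.)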
deallocate(WORK, IWORK)
end function EVALS
function EVECS(matrix, n) result(eigenvecs)
! Here, the eigenvectors of a square, symmetric, real matrix are calculated.
! The eigenvalues associated with said eigenvectors are not given as a result, but can be obtained with the function EVALS.
!
! ARGUMENTS: matrix : 2D array containing the matrix whose eigenvectors will be calculated.
! n : integer which represents the number of rows/columns (it doesn't matter which as the matrix is square).
implicit none
integer(i4b), intent(in) :: n
integer(i4b) :: LDA, LWORK, LIWORK, INFO
real(dp), intent(in) :: matrix(n, n)
real(dp), allocatable :: WORK(:)
integer(i4b), allocatable :: IWORK(:) ! DSYEVD requires an INTEGER workspace array
real(dp) :: eigenvecs(n, n), eigenvals(n), work_mat(n,n)
character :: JOBZ, UPLO
! Assigning working matrix...
work_mat(:,:) = 0.0
work_mat(:,:) = matrix(:,:)
! Initialising some values....
JOBZ = 'V'
UPLO = 'U'
LDA = n
INFO = 0
! Allocating working array....
LWORK = MAX(1, (1 + 6*n + 2*n**2))
LIWORK = MAX(1, (3 + 5*n))
allocate(WORK(LWORK))
allocate(IWORK(LIWORK))
WORK(:) = 0
IWORK(:) = 0
! Obtaining eigenvectors...
eigenvals(:) = 0.0
call DSYEVD(JOBZ, UPLO, n, work_mat, LDA, eigenvals, WORK, LWORK, IWORK, LIWORK, INFO)
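! (Note: INFO is not checked after the call; a robust version would test INFO /= 0.)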
eigenvecs(:,:) = 0.0
eigenvecs(:,:) = work_mat(:,:)
deallocate(WORK, IWORK)
end function EVECS
END MODULE math
Now, below is a minimal example of how these functions are used within another source file. I have printed two eigenvectors (one from each matrix) which are very similar in their values, but with opposite signs. If you look at other corresponding eigenvectors, some are more or less similar (as expected), and others show this same opposite-sign behaviour.
program eigen_test
use math
implicit none
integer(i4b) :: i, j
integer(i4b), parameter :: npr=15
real(dp) :: mat_a(npr,npr), mat_b(npr,npr)
real(dp) :: eigenvals_a(npr), eigenvals_b(npr)
real(dp) :: eigenvecs_a(npr,npr), eigenvecs_b(npr,npr)
open(10, file="mat_a")
read(10,*) ((mat_a(i,j), j=1,npr), i=1,npr)
open(11, file="mat_b")
read(11,*) ((mat_b(i,j), j=1,npr), i=1,npr)
eigenvals_a = EVALS(mat_a, npr)
eigenvecs_a = EVECS(mat_a, npr)
eigenvals_b = EVALS(mat_b, npr)
eigenvecs_b = EVECS(mat_b, npr)
print *, eigenvecs_a(:,4)
print *, eigenvecs_b(:,4)
end program eigen_test
Perhaps this is a problem with my understanding of eigendecomposition, or maybe I am not using the LAPACK routines correctly, but I hope the problem is clear and reproducible.
Thanks in advance!
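(A note on the behaviour described above: an eigenvector is only determined up to a scalar factor, so for the unit-normalised vectors DSYEVD returns, both v and -v are equally valid, and LAPACK makes no promise about which sign you get; close or degenerate eigenvalues can additionally mix their eigenvectors. If a consistent sign is needed for comparisons, one common convention is to flip each vector so that its largest-magnitude component is positive. A minimal sketch of a hypothetical helper, assuming the same dp/i4b kinds and column-per-eigenvector layout as above:)

subroutine fix_signs(eigenvecs, n)
    ! Hypothetical helper: enforce a sign convention so that the
    ! largest-magnitude component of each column eigenvector is positive.
    integer(i4b), intent(in) :: n
    real(dp), intent(inout) :: eigenvecs(n, n)
    integer(i4b) :: j, loc(1)
    do j = 1, n
        loc = maxloc(abs(eigenvecs(:, j)))
        if (eigenvecs(loc(1), j) < 0.0_dp) eigenvecs(:, j) = -eigenvecs(:, j)
    end do
end subroutine fix_signs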
I am evaluating the overhead cost (in wall-clock time) of some features in Fortran programs, and I came across the following behaviour with GNU Fortran that I did not expect: having the subroutine in the same file as the main program (in the contains section or in a module) versus having the subroutine in a separate module (in a separate file) has a big impact.
The simple code that reproduces the behavior is:
I have a subroutine that does a matrix-vector multiplication 250000 times. In the first test, the subroutine is in the contains section of the main program. In the second test, the same subroutine is in a separate module.
The difference in performance between the two is big.
With the subroutine in the contains section of the main program, 10 runs yield:
min: 1.249
avg: 1.266
1.275 - 1.249 - 1.264 - 1.279 - 1.266 - 1.253 - 1.271 - 1.251 - 1.269 - 1.284
With the subroutine in a separate module, 10 runs yield:
min: 1.848
avg: 1.861
1.848 - 1.862 - 1.853 - 1.871 - 1.854 - 1.883 - 1.810 - 1.860 - 1.886 - 1.884
About 50% slower; this factor seems consistent with the size of the matrix as well as with the number of iterations.
These tests were done with gfortran 4.8.5. With gfortran 8.3.0, the program runs a little faster, but the time still doubles from the subroutine in the contains section of the main program to the subroutine in a separate module.
The Portland Group compiler does not have this problem with my test program, and it even runs faster than the best case of gfortran.
If I read the size of the matrix from an input file (or a runtime command-line argument) and allocate dynamically, then the difference in wall-clock time goes away, and both cases run slower (both take roughly the wall-clock time of the subroutine in the separate module/file). I suspect that gfortran is able to optimize the main program better when the size of the matrix is known at compile time.
What am I doing wrong that the GNU compilers do not like, or what is the GNU compiler doing poorly? Are there compile flags to help gfortran in such cases?
Everything is compiled with optimization -O3
Code (test_simple.f90)
!< @file test_simple.f90
!! simple test
!>
!
program test_simple
!
use iso_fortran_env
use test_mod
!
implicit none
!
integer, parameter :: N = 100
integer, parameter :: N_TEST = 250000
logical, parameter :: GENERATE=.false.
!
real(real64), parameter :: dx = 10.0_real64
real(real64), parameter :: lx = 40.0_real64
!
real(real64), dimension(N,N) :: A
real(real64), dimension(N) :: x, y
real(real64) :: start_time, end_time
real(real64) :: duration
!
integer :: k, loop_idx
!
call make_matrix(A,dx,lx)
x = A(N/2,:)
!
y = 0
call cpu_time( start_time )
call axpy_loop (A, x, y, N_TEST)
!call axpy_loop_in (A, x, y, N_TEST)
!
call cpu_time( end_time )
!
duration = end_time-start_time
!
if( duration < 0.01 )then
write( *, "('Total time:',f10.6)" ) duration
else
write( *, "('Total time:',f10.3)" ) duration
end if
!
write(*,"('Sum = ',ES14.5E3)") sum(y)
!
contains
!
!< @brief compute y = y + A^n x
!! @param[in] A matrix to use
!! @param[in] x vector to use
!! @param[in, out] y output
!! @param[in] nloop number of iterations, power to apply to A
!!
!>
subroutine axpy_loop_in (A, x, y, nloop)
real(real64), dimension(:,:), intent(in) :: A
real(real64), dimension(:), intent(in) :: x
real(real64), dimension(:), intent(inout) :: y
integer, intent(in) :: nloop
!
real(real64), dimension(size(x)) :: z
integer :: k, iter
!
y = x
do iter = 1, nloop
z = y
y = 0
do k = 1, size(A,2)
y = y + A(:,k)*z(k)
end do
end do
!
end subroutine axpy_loop_in
!
!> @brief Computes the square exponential correlation kernel matrix for
!! a 1D uniform grid, using the coordinate vector and scalar parameters
!! @param [in, out] C square matrix of correlation (kernel)
!! @param [in] dx grid spacing
!! @param [in] lx decorrelation length
!!
!! The correlation between the grid points i and j is given by
!! \f$ C(i,j) = \exp\left( \frac{-(x_i - x_j)^2}{2 l_{x_i} l_{x_j}} \right) \f$
!! where x_i and x_j are respectively the coordinates of points i and j
!>
subroutine make_matrix(C, dx, lx)
! some definitions of the square correlation
! uses 2l^2 while other use l^2
! l^2 is used here by setting this factor to 1.
real(real64), parameter :: factor = 1.0
!
real(real64), dimension(:,:), intent(in out) :: C
real(real64), intent(in) :: dx
real(real64), intent(in) :: lx
! Local variables
real(real64), dimension(size(C,2)) :: nfacts
real(real64) :: dist, denom
integer :: ii, jj
!
do jj=1, size(C,2)
do ii=1, size(C,1)
dist = (ii-jj)*dx
denom = factor*lx*lx
C(ii, jj) = exp( -dist*dist/denom )
end do
! compute normalization factors
nfacts(jj) = sqrt( sum( C(:, jj) ) )
end do
!
! normalize to prevent arbitrary growth in those tests
! where we apply the exponential of the matrix
do jj=1, size(C,2)
do ii=1, size(C,1)
C(ii, jj) = C(ii, jj)/( nfacts(ii)*nfacts(jj) )
end do
end do
! remove the very small
where( C<epsilon(1.) ) C=0.
!
end subroutine make_matrix
!
end program test_simple
!
Code (test_mod.f90)
!> @file test_mod.f90
!! simple operations
!<
!< @brief module for simple operations
!!
!>
module test_mod
use iso_fortran_env
implicit none
contains
!
!< @brief compute y = y + A^n x
!! @param[in] A matrix to use
!! @param[in] x vector to use
!! @param[in, out] y output
!! @param[in] nloop number of iterations, power to apply to A
!!
!>
subroutine axpy_loop( A, x, y, nloop )
real(real64), dimension(:,:), intent(in) :: A
real(real64), dimension(:), intent(in) :: x
real(real64), dimension(:), intent(inout) :: y
integer, intent(in) :: nloop
!
real(real64), dimension(size(x)) :: z
integer :: k, iter
!
y = x
do iter = 1, nloop
z = y
y = 0
do k = 1, size(A,2)
y = y + A(:,k)*z(k)
end do
end do
!
end subroutine axpy_loop
!
end module test_mod
compile as
gfortran -O3 -o simple test_mod.f90 test_simple.f90
run as
./simple
The combination of the flags -march=native and -flto is the solution to the problem, at least on my test computers. With those options the program is fully optimized, and there is no difference between having the subroutine in the same file as the main program or in a separate file (separate module): -flto enables link-time optimization, which lets the compiler optimize across file boundaries, for example by inlining the module subroutine into the main program just as it can inline a contained one. In addition, the runtime is comparable to the runtime with the Portland Group compiler. Either option alone did not solve the problem; -march=native alone speeds up the contains version but makes the module version worse.
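For reference, the compile line from above with both flags added is:

gfortran -O3 -march=native -flto -o simple test_mod.f90 test_simple.f90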
My biased thinking is that the option -march=native should be the default; users doing something else are experienced and know what they are doing, so they can add the appropriate option or disable the default, whereas the common user will not easily think of it.
Thank you for all the comments.
As shown in the image below, I'm creating a program that will make a 2D animation of a truck that is made up of two articulated parts.
The truck pulls the trailer.
The trailer moves according to the docking axis on the truck.
Then, when the truck turns, the trailer should gradually align itself with the new angle of the truck, as it does in real life.
I would like to know if there is any formula or algorithm that does this calculation in an easy way.
I've already seen inverse kinematics equations, but I think for just 2 parts it would not be so complex.
Can anybody help me?
Let A be the midpoint under the front axle, B the midpoint under the middle axle, and C the midpoint under the rear axle. For simplicity, assume that the hitch is at point B. These are all functions of time t, for example A(t) = (a_x(t), a_y(t)).
The trick is this: B moves directly towards A, with the component of A's velocity in that direction. In symbols, with u_AB = (A-B)/||A-B||, dB/dt = ((dA/dt) . u_AB) u_AB, and similarly dC/dt = ((dB/dt) . u_BC) u_BC with u_BC = (B-C)/||B-C||, where . is the dot product.
This turns into a non-linear first-order system in 6 variables. This can be solved with normal techniques, such as https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods.
UPDATE: Added code
Here is a Python implementation. You can replace it with https://rosettacode.org/wiki/Runge-Kutta_method for your favorite language and your favorite linear-algebra library, or even hand-roll it.
For my example I started with A at (1, 1), B at (2, 1) and C at (2, 2), then pulled A towards the origin with velocity (-1, -1) in time steps of 0.02. That can be altered to anything that you want.
#! /usr/bin/env python
import numpy

# Runge-Kutta method.
def RK4(f):
    return lambda t, y, dt: (
        lambda dy1: (
        lambda dy2: (
        lambda dy3: (
        lambda dy4: (dy1 + 2*dy2 + 2*dy3 + dy4)/6
        )( dt * f( t + dt  , y + dy3   ) )
        )( dt * f( t + dt/2, y + dy2/2 ) )
        )( dt * f( t + dt/2, y + dy1/2 ) )
        )( dt * f( t       , y         ) )

# da is a function giving the velocity of A at time t.
# The other three arguments are the initial positions of the three points.
def calculate_dy (da, A0, B0, C0):
    l_ab = float(numpy.linalg.norm(A0 - B0))
    l_bc = float(numpy.linalg.norm(B0 - C0))

    # t is time, y = [A, B, C]
    def update (t, y):
        (A, B, C) = y
        dA = da(t)

        ab_unit = (A - B) / float(numpy.linalg.norm(A - B))
        # The first term is the towing velocity component. The second is a
        # correction that makes roundoff errors in the length self-correcting.
        dB = (dA.dot(ab_unit) + float(numpy.linalg.norm(A - B)) - l_ab) * ab_unit

        bc_unit = (B - C) / float(numpy.linalg.norm(B - C))
        # The same correction applies to the B-C link.
        dC = (dB.dot(bc_unit) + float(numpy.linalg.norm(B - C)) - l_bc) * bc_unit

        return numpy.array([dA, dB, dC])

    return RK4(update)

A0 = numpy.array([1.0, 1.0])
B0 = numpy.array([2.0, 1.0])
C0 = numpy.array([2.0, 2.0])
dy = calculate_dy(lambda t: numpy.array([-1.0, -1.0]), A0, B0, C0)

t, y, dt = 0., numpy.array([A0, B0, C0]), .02
while t <= 1.01:
    print( (t, y) )
    t, y = t + dt, y + dy( t, y, dt )
From the answers I saw, I realized that the solution is not really simple and will have to be handled by an inverse kinematics algorithm.
This site is an example, and it is just a start, although it still does not solve everything, since its point C is fixed, while in the case of the truck it should move.
Based on this Analytic Two-Bone IK in 2D article, I made a fully functional model in GeoGebra, where the core consists of two simple mathematical equations.
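(For reference, the core of the analytic two-bone solution referenced above usually comes down to two law-of-cosines equations; this is the standard textbook form, not quoted from the article. With l1 and l2 the lengths of the two segments and (x, y) the target position relative to the base joint:

theta2 = +/- acos( (x^2 + y^2 - l1^2 - l2^2) / (2*l1*l2) )
theta1 = atan2(y, x) - atan2( l2*sin(theta2), l1 + l2*cos(theta2) )

where the sign of theta2 selects one of the two mirrored elbow configurations.)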
I want to create a program in Fortran that multiplies vectors from a .dat file that has the following format:
x1 y1 z1
x2 y2 z2
The indices 1 and 2 refer to vectors 1 and 2, respectively. First I want to identify the vectors; so far I have
program ex2
implicit none
real*8 x
integer i
write(6,*) "Insert the vectors from vet_in.dat"
open (10, file ="vet_in.dat")
read (10,*) x(i), i=1,3
end program ex2
The read(10,*) line was suggested to me and I don't quite get it; I thought Fortran identified the i,j matrix indices. Then I want to multiply x1*x2, y1*y2 and z1*z2; maybe a loop and an if could be used. Can you help me to proceed?
First, you need to declare x, and also y, as arrays of rank 1 and size 3:
real*8 x(3), y(3)
And also a scalar variable for the result
real*8 result
Don't write to unit 6, but use *:
write(*,*) "Insert..."
but I wouldn't write anything at all.
Now you can read the vectors. If they are stored in rows you can read them in one go
read(10,*) x
or
read(10,*) (x(i), i=1, 3)
(read about implied do in any textbook).
and then the same for y.
Then you can make a scalar product of them:
result = dot_product(x, y)
(see https://gcc.gnu.org/onlinedocs/gfortran/DOT_005fPRODUCT.html)
or
result = sum(x*y)
or
result = 0
do i = 1, 3
result = result + x(i) * y(i)
end do
Note that real*8 is not legal standard Fortran, just a non-standard extension. You can use double precision instead until you learn kinds.
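Putting those pieces together, a minimal complete program could look like this (the file name comes from the question; the printed label is just illustrative):

program ex2
    implicit none
    double precision :: x(3), y(3), result
    integer :: i
    open(10, file="vet_in.dat")
    read(10,*) (x(i), i = 1, 3) ! first row: vector 1, via an implied do
    read(10,*) y                ! second row: vector 2, reading the whole array
    close(10)
    result = dot_product(x, y)
    print *, "dot product =", result
end program ex2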
First of all, I am a complete novice at Fortran, and at most forms of programming in general. With that said, I am attempting to build a box, then randomly generate x, y, z coordinates for 100 atoms. From there, the goal is to calculate the distance between each pair of atoms and perform some math on each distance. Below is my code. Even though n is defined as 100, and will print '100', when I print cx I only get 20 results.
program energytot
implicit none
integer :: i, n, j, seed(12), k, m
double precision:: sigma, r, epsilon, lx, ly, lz
double precision, dimension(:), allocatable :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
allocate(x(n), y(n), z(n), LJx(n), LJy(n), LJz(n), dx(n), dy(n), dz(n))
n = 100 !Number of molecules inside the box
sigma = 4.1
epsilon = 1.7
!Box length with respect to the axis
lx = 15
ly = 15
lz = 15
do i=1,12
seed(i)=1+3
end do
!generate n random numbers for x, y, z
call RANDOM_SEED(PUT = seed)
call random_number(x)
call random_number(y)
call random_number(z)
!convert random numbers into x, y, z coordinates with (0,0,0) as the central point
cx = ((2*x)-1)*(lx*0.5)
cy = ((2*y)-1)*(lx*0.5)
cz = ((2*z)-1)*(lz*0.5)
do j=1,n-1
do k=j+1,n
dx = ABS((cx(j) - cx(j+1)))
LJx = 4 * epsilon * ((sigma/dx(j))**12 - (sigma/dx(j))**6)
dy = ABS((cy(j) - cy(j+1)))
LJy = 4 * epsilon * ((sigma/dy(j))**12 - (sigma/dy(j))**6)
dz = ABS((cz(j) - cz(j+1)))
LJz = 4 * epsilon * ((sigma/dz(j))**12 - (sigma/dz(j))**6)
end do
end do
print*,cx
print*,x
end program energytot
You declare cx (and cy and cz) allocatable, but you do not allocate space for them. Moreover, before you assign a value to variable n, you use it as the number of elements to allocate for your other allocatables. Why do any of those even need to be dynamically allocated in the first place?
I would replace this code ...
integer :: i, n, j, seed(12), k, m
double precision:: sigma, r, epsilon, lx, ly, lz
double precision, dimension(:), allocatable :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
allocate(x(n), y(n), z(n), LJx(n), LJy(n), LJz(n), dx(n), dy(n), dz(n))
n = 100 !Number of molecules inside the box
... with this:
integer, parameter :: n = 100
integer :: i, j, seed(12), k, m
double precision :: sigma, r, epsilon, lx, ly, lz
double precision, dimension(n) :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
I also observe that in the loop where you compute distances, you loop over variable k, but you do not use its value. As a result, it looks like you compute the same distances many times over.
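To illustrate, here is a minimal sketch of how the pair loop could use both indices, with scalar temporaries instead of whole-array assignments. Using the full 3D pair distance in the standard Lennard-Jones form 4*epsilon*((sigma/r)**12 - (sigma/r)**6), and accumulating a single total energy, are assumptions about the intended physics rather than the only possible reading:

double precision :: rx, ry, rz, r, e_total
e_total = 0.0d0
do j = 1, n - 1
    do k = j + 1, n
        ! components of the separation between atoms j and k
        rx = cx(j) - cx(k)
        ry = cy(j) - cy(k)
        rz = cz(j) - cz(k)
        r = sqrt(rx*rx + ry*ry + rz*rz)
        ! Lennard-Jones pair energy, accumulated over all distinct pairs
        e_total = e_total + 4.0d0 * epsilon * ((sigma/r)**12 - (sigma/r)**6)
    end do
end do
print *, e_total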