First of all, I am a complete novice to Fortran and to most forms of programming in general. With that said, I am attempting to build a box and then randomly generate x, y, z coordinates for 100 atoms. From there, the goal is to calculate the distance between each pair of atoms and perform some math on the distance result. Below is my code. Even though n is defined as 100 (and prints as 100), when I print cx I only get 20 results.
program energytot
implicit none
integer :: i, n, j, seed(12), k, m
double precision:: sigma, r, epsilon, lx, ly, lz
double precision, dimension(:), allocatable :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
allocate(x(n), y(n), z(n), LJx(n), LJy(n), LJz(n), dx(n), dy(n), dz(n))
n = 100 !Number of molecules inside the box
sigma = 4.1
epsilon = 1.7
!Box length with respect to the axis
lx = 15
ly = 15
lz = 15
do i=1,12
   seed(i)=1+3
end do
!generate n random numbers for x, y, z
call RANDOM_SEED(PUT = seed)
call random_number(x)
call random_number(y)
call random_number(z)
!convert random numbers into x, y, z coordinates with (0,0,0) as the central point
cx = ((2*x)-1)*(lx*0.5)
cy = ((2*y)-1)*(lx*0.5)
cz = ((2*z)-1)*(lz*0.5)
do j=1,n-1
   do k=j+1,n
      dx = ABS((cx(j) - cx(j+1)))
      LJx = 4 * epsilon * ((sigma/dx(j))**12 - (sigma/dx(j))**6)
      dy = ABS((cy(j) - cy(j+1)))
      LJy = 4 * epsilon * ((sigma/dy(j))**12 - (sigma/dy(j))**6)
      dz = ABS((cz(j) - cz(j+1)))
      LJz = 4 * epsilon * ((sigma/dz(j))**12 - (sigma/dz(j))**6)
   end do
end do
print*,cx
print*,x
end program energytot
You declare cx (and cy and cz) allocatable, but you do not allocate space for them. Moreover, before you assign a value to variable n, you use it as the number of elements to allocate for your other allocatables. Why do any of those even need to be dynamically allocated in the first place?
I would replace this code ...
integer :: i, n, j, seed(12), k, m
double precision:: sigma, r, epsilon, lx, ly, lz
double precision, dimension(:), allocatable :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
allocate(x(n), y(n), z(n), LJx(n), LJy(n), LJz(n), dx(n), dy(n), dz(n))
n = 100 !Number of molecules inside the box
... with this:
integer, parameter :: n = 100
integer :: i, j, seed(12), k, m
double precision :: sigma, r, epsilon, lx, ly, lz
double precision, dimension(n) :: cx, cy, cz, dx, dy, dz, x, y, z, LJx, LJy, LJz
I also observe that in the loop where you compute distances, you loop over variable k, but you do not use its value. As a result, it looks like you compute the same distances many times over.
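To make that concrete, here is a minimal sketch of the intended pairwise loop (written in Python purely for illustration, with stand-in values for n, sigma, epsilon and the coordinates; it only shows the x component, exactly as the original does). The point is that the inner index k must appear in the loop body, i.e. the code should compare cx(j) with cx(k), not with cx(j+1):

import random

# Hypothetical stand-ins for the values used in the original program.
n, sigma, epsilon, lx = 100, 4.1, 1.7, 15.0
cx = [(2 * random.random() - 1) * (lx * 0.5) for _ in range(n)]

for j in range(n - 1):            # Fortran: do j = 1, n-1
    for k in range(j + 1, n):     # Fortran: do k = j+1, n
        dx = abs(cx[j] - cx[k])   # uses k, so each (j, k) pair gives a new distance
        ljx = 4 * epsilon * ((sigma / dx) ** 12 - (sigma / dx) ** 6)
        # ...the same pattern applies to the y and z components...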
I've run into a problem where multiplying a non-null matrix by an identity matrix via LAPACK gives me a null matrix. All matrices have positive elements.
Dimensions of the matrices:
W is M1 x N1
D is N1 x N1
M is M1 x M1
D and M are identity matrices.
What I am trying to do is compute the product D*W'*M, where W' is the transpose of W. Here is the Fortran code, using the DGEMM operation:
PROGRAM SIRT
DOUBLE PRECISION,ALLOCATABLE,DIMENSION(:,:) :: W, D, M, C, MAIN
DOUBLE PRECISION,ALLOCATABLE,DIMENSION(:) :: b, x
DOUBLE PRECISION :: pho
INTEGER, PARAMETER :: M1 = 27000
INTEGER, PARAMETER :: N1 = 1000
INTEGER, PARAMETER :: num_iterations = 200
INTEGER :: i, j, k
allocate(W(1:M1,1:N1))
allocate(D(1:N1,1:N1))
allocate(M(1:M1,1:M1))
allocate(C(1:N1,1:M1))
allocate(MAIN(1:N1,1:M1))
allocate(b(1:M1))
allocate(x(1:N1))
D = 0
M = 0
DO i=1,N1
   D(i,i) = 1
END DO
DO i=1,M1
   M(i,i) = 1
END DO
OPEN(UNIT=11, FILE="Wmpi.txt")
DO i = 1,M1
   READ(11,*) (W(i,j),j=1,N1)
END DO
print *,ANY(W>0)
CLOSE (11, STATUS='KEEP')
OPEN(UNIT=11, FILE="bmpi.txt")
DO i = 1,M1
   READ(11,*) b(i)
END DO
CLOSE (11, STATUS='KEEP')
CALL DGEMM('N', 'T', N1, N1, N1, 1.0, D, N1, W, N1, 0.0, C, N1)
print *,ANY(C>0)
CALL DGEMM('N', 'N', N1, M1, M1, 1.0, C, N1, M, M1, 0.0, MAIN, N1)
print *,ANY(MAIN>0)
pho = DLANGE('F', N1, M1, C, N1, x)
END PROGRAM SIRT
The output of the three sequential prints is True, True, False. So the first multiplication works and I get a non-null matrix, but after the second one all elements are 0.
I know that I don't need to multiply by identity matrices, but I want to figure out what the problem is when I do.
Another question: can I do this memory-efficiently, without the temporary matrices MAIN and C?
EDIT
Figured out, by dumping the resulting matrix after the first multiplication, that all of its elements are null. I can't understand why ANY(C>0) is True at the second stage.
Firstly, note that DGEMM is part of the BLAS library, not LAPACK; the latter is a higher-level library.
You have a few problems with your calls to DGEMM:
1) Real constants have a kind, and you must provide the correct kind. In particular, DGEMM expects what in old-fashioned Fortran was called double precision, while the constants you provided are default-kind reals. This will cause errors, and I would strongly recommend providing a kind for ALL real constants whenever the precision required by your program is not the default.
2) Of the first three integer arguments, the first two are always the shape of the result matrix. C is N1 x M1, hence the change to the first call to DGEMM.
3) The leading dimension of a matrix is, 99.999% of the time, the first dimension as allocated/declared. It has nothing to do with the maths at all; it is purely so that DGEMM can work out how the matrix is laid out in memory. It was wrong for W in the first DGEMM.
I would also suggest that, since a proper check of the results is fairly easy to do with the MATMUL intrinsic, you do that rather than the half-hearted check in the original, and that you avoid using unit matrices for tests unless you really want unit matrices, as the high symmetry and the large number of zeros can easily mask errors.
Pulling this all together, cutting out the irrelevant parts and modifying your program into a slightly more modern form, I get:
Program sirt
Integer, Parameter :: wp = Kind( 1.0d0 )
Real( wp ),Allocatable,Dimension(:,:) :: w, d, m, c, main, main_compare
Real( wp ),Allocatable,Dimension(:) :: b, x
! Change problem size to something manageable on my laptop
!!$ Integer, Parameter :: m1 = 27000
!!$ Integer, Parameter :: n1 = 1000
Integer, Parameter :: m1 = 2700
Integer, Parameter :: n1 = 100
!!$ Integer :: i
Allocate(w(1:m1,1:n1))
Allocate(d(1:n1,1:n1))
Allocate(m(1:m1,1:m1))
Allocate(c(1:n1,1:m1))
Allocate(main(1:n1,1:m1))
Allocate(main_compare(1:n1,1:m1))
Allocate(b(1:m1))
Allocate(x(1:n1))
d = 0.0_wp
m = 0.0_wp
! Don't trust unit matrices for tests, too much symmetry, too many zeros - use Random numbers
!!$ Do i=1,n1
!!$ d(i,i) = 1.0_wp
!!$ End Do
Call Random_number( d )
!!$
!!$ Do i=1,m1
!!$ m(i,i) = 1.0_wp
!!$ End Do
Call Random_number( m )
! Don't have the file - use random numbers
!!$ open(unit=11, file="wmpi.txt")
!!$ do i = 1,m1
!!$ read(11,*) (w(i,j),j=1,n1)
!!$ end do
!!$ print *,any(w>0)
!!$ close (11, status='keep')
Call random_Number( w )
! 1) Real constants have a kind, and you must provide the correct kind
! 2) Of the first three integer arguments the first two are always
!    the shape of the result matrix. C is n1 x m1, hence the change
! 3) The leading dimension of a matrix is 99.999% of the time
!    the first dimension as allocated/declared - it has nothing
!    to do with the maths at all. It was wrong for W in the
!    first DGEMM
! 4) DGEMM is part of the BLAS library - not LAPACK
!!$ CALL DGEMM('N', 'T', N1, N1, N1, 1.0, D, N1, W, N1, 0.0, C, N1)
!!$ CALL DGEMM('N', 'N', N1, M1, M1, 1.0, C, N1, M, M1, 0.0, MAIN, N1)
Call dgemm('n', 't', n1, m1, n1, 1.0_wp, d, n1, w, m1, 0.0_wp, c, n1)
Call dgemm('n', 'n', n1, m1, m1, 1.0_wp, c, n1, m, m1, 0.0_wp, main, n1)
! 5) Don't do half hearted checks on the results when proper checks are easy
main_compare = Matmul( Matmul( d, Transpose( w ) ), m )
Write( *, * ) 'Max error ', Maxval( Abs( main - main_compare ) )
! Check we haven't somehow managed all zeros in both matrices ...
Write( *, * ) main( 1:3, 1 )
Write( *, * ) main_compare( 1:3, 1 )
End Program sirt
ian@eris:~/work/stack$ gfortran-8 -std=f2008 -fcheck=all -Wall -Wextra -pedantic -O matmul.f90 -lblas
/usr/bin/ld: warning: libgfortran.so.4, needed by //usr/lib/x86_64-linux-gnu/libopenblas.so.0, may conflict with libgfortran.so.5
ian@eris:~/work/stack$ ./a.out
Max error 3.6379788070917130E-011
38576.055405987529 33186.640731082334 33818.909332709263
38576.055405987536 33186.640731082334 33818.909332709263
ian@eris:~/work/stack$ ./a.out
Max error 2.9103830456733704E-011
34303.739077708480 34227.623080598998 34987.143088270866
34303.739077708473 34227.623080598998 34987.143088270859
ian@eris:~/work/stack$ ./a.out
Max error 3.2741809263825417E-011
35968.603030053979 34778.110740682620 32732.657800858586
35968.603030053971 34778.110740682612 32732.657800858586
ian@eris:~/work/stack$ ./a.out
Max error 2.9103830456733704E-011
31575.076511213174 35879.913361891951 35278.030249048912
31575.076511213178 35879.913361891951 35278.030249048912
I've been given the following function written in pseudocode:
P:
{
    int x, y, z;
    read (x, y, z);
    while (x != y) {
        x = x - y;
        z = z + y
    };
    write z;
}
Given that f(x,y,z) is the function computed by P, I would like to know whether the function "g(x,y,z) = 1 if f(x,y,z) is not a total function, g(x,y,z) = 0 otherwise" is computable.
My first guess is: yes, it is computable (for example for x=y).
Is there a more rigorous general approach to prove that?
P does not change the value of y, and the only way it changes the value of x is to subtract y from x until x = y. If subtracting y from x does not eventually result in x = y, then the loop continues forever. We know that repeatedly subtracting y from x can only lead to x = y if initially x = cy for some natural number c >= 1. So g(x,y,z) = 1 everywhere, because f(x,y,z) is not a total function: it is undefined whenever x != cy for every natural number c >= 1, and a constant function is trivially computable. Even if what you meant is that g(x,y,z) = 1 whenever f(x,y,z) is defined, it is still computable, since g(x,y,z) is then the function:
g(x,y,z) = { 1, if x = cy for some natural number c >= 1 }
{ 0, otherwise }
The condition x = cy for some natural number c >= 1 is itself computable since this is equivalent to "x >= y" and "GCD(x, y) = y".
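As a small Python sketch of that test (illustrative only, assuming positive integer inputs as in the reasoning above):

from math import gcd

def halts(x, y):
    # P terminates exactly when x is a positive multiple of y,
    # i.e. when x >= y and GCD(x, y) = y.
    return x >= y and gcd(x, y) == y

def g(x, y, z):
    # The second reading above: g = 1 whenever f(x, y, z) is defined.
    return 1 if halts(x, y) else 0

print(g(6, 2, 0), g(7, 2, 0))  # prints: 1 0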
This is my first logic programming course, so this is a really dumb question, but I cannot for the life of me figure out how this power predicate works. I've tried making a search tree to trace it, but I still cannot understand how it is working.
mult(_, 0, 0).
mult(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    mult(X, Y1, Z1),
    Z is Z1 + X.

exp2(_, 0, 1).
exp2(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    exp2(X, Y1, Z1),
    mult(X, Z1, Z).
So far I get that I'm going to call the exp2 predicate until I reach the point where Y is zero, and then I'm going to start multiplying from there. But at the last call, when it's at exp2(2, 1, Z), what is the value of Z, and how does the predicate work from there?
Thank you very much =)
EDIT: I'm really sorry for the late reply; I had some problems and couldn't access my PC.
I'll walk through mult/3 in more detail here, but I'll leave exp2/3 to you as an exercise. It's similar.
As I mentioned in my comment, you want to read a Prolog predicate as a rule.
mult(_, 0, 0).
This rule says that 0 is the result of multiplying anything (_) by 0. The variable _ is an anonymous variable, meaning it is a variable whose value you don't care about.
mult(X, Y, Z) :-
This says, Z is the result of multiplying X by Y if....
Y > 0,
Establish that Y is greater than 0.
Y1 is Y - 1,
And that Y1 has the value of Y minus 1.
mult(X, Y1, Z1),
And that Z1 is the result of multiplying X by Y1.
Z is Z1 + X.
And Z is the value of Z1 plus X.
Or reading the mult(X, Y, Z) rule altogether:
Z is the result of multiplying X by Y if Y is greater than 0, and Y1 is Y-1, and Z1 is the result of multiplying X by Y1, and Z is the result of adding Z1 to X.
Now digging a little deeper, you can see this is a recursive definition, in that the multiplication of two numbers is being defined in terms of another multiplication. But what is being multiplied is important. Mathematically, it's using the fact that x * y is equal to x * (y - 1) + x. So it keeps reducing the second multiplicand by 1 and calling itself on the slightly reduced problem. When does this recursive reduction finally end? Well, as shown above, the second rule says Y must be greater than 0. If Y is 0, then the first rule, mult(_, 0, 0), applies and the recursion finally comes back with a 0.
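If it helps, the same recursive idea written out procedurally looks like this (a Python sketch purely for illustration, not part of the Prolog program); the base case plays the role of mult(_, 0, 0) and the recursive case mirrors the second rule:

def mult(x, y):
    # mult(_, 0, 0).
    if y == 0:
        return 0
    # Y > 0, Y1 is Y - 1, mult(X, Y1, Z1), Z is Z1 + X.
    return mult(x, y - 1) + x

print(mult(3, 4))  # 12, since 3*4 = 3*3 + 3 = (3*2 + 3) + 3 = ...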
If you are not sure how recursion works or are unfamiliar with it, I highly recommend Googling it to understand it. That is, indeed, a concept that applies to many computer languages. But you need to be careful about learning Prolog via comparison with other languages. Prolog is fundamentally different in its behavior from procedural/imperative languages like Java, Python, C, C++, etc. It's best to get used to interpreting Prolog rules and facts as I have described above.
Say you want to compute 2^3 and assign the result to R.
For that you will call exp2(2, 3, R).
It will recursively call exp2(2, 2, R1), then exp2(2, 1, R2), and finally exp2(2, 0, R3).
At this point exp2(_, 0, 1) will match and R3 will be bound to 1.
Then, as the call stack unwinds, 1 will be multiplied by 2 three times.
In Java this logic would be encoded as follows. Execution would go pretty much the same route.
public static int Exp2(int X, int Y) {
    if (Y == 0) {             // exp2(_, 0, 1).
        return 1;
    }
    if (Y > 0) {              // Y > 0
        int Y1 = Y - 1;       // Y1 is Y - 1
        int Z1 = Exp2(X, Y1); // exp2(X, Y1, Z1);
        return X * Z1;        // mult(X, Z1, Z).
    }
    return -1; // this should never happen.
}
I have a high-res binary image which looks something like this: [image omitted]
I'm trying to compute the major axis, which should be slightly rotated to the right, and eventually get the axis of orientation of the object.
A post here (in MATLAB) suggests that one way of doing this is to compute the covariance matrix of the data points and find its eigenvalues/eigenvectors.
I am trying to implement something similar in R.
%% MATLAB CODE Calculate axis and draw
[M N] = size(Ibw);
[X Y] = meshgrid(1:N,1:M);
%Mass and mass center
m = sum(sum(Ibw));
x0 = sum(sum(Ibw.*X))/m;
y0 = sum(sum(Ibw.*Y))/m;
#R code
d = dim(im)
M = d[1]
N = d[2]
t = meshgrid(M,N)
X = t[[2]]
Y = t[[1]]
m = sum(im);
x0 = sum(im %*% X)/m;
y0 = sum(im %*% Y)/m;
meshgrid <-function(r,c){
return(list(R=matrix(rep(1:r, r), r, byrow=T),
C=matrix(rep(1:c, c), c)))
}
However, computing m, x0 and y0 takes too long in R.
Does anyone know of an implementation in R?
Computing the variance matrix directly, with var, takes 1/3 of a second.
# Sample data
M <- 2736
N <- 3648
im <- matrix( FALSE, M, N );
y <- as.vector(row(im))
x <- as.vector(col(im))
im[ abs( y - M/2 ) < M/3 & abs( x - N/2 ) < N/3 ] <- TRUE
#image(im)
theta <- runif(1, -pi/12, pi/12)
xy <- cbind(x+1-N/2,y+1-M/2) %*% matrix(c( cos(theta), sin(theta), -sin(theta), cos(theta) ), 2, 2)
#plot(xy[,1]+N/2-1, xy[,2]+M/2-1); abline(h=c(1,M),v=c(1,N))
f <- function(u, lower, upper) pmax(lower,pmin(round(u),upper))
im[] <- im[cbind( f(xy[,2] + M/2 - 1,1,M), f(xy[,1] + N/2 - 1,1,N) )]
image(1:N, 1:M, t(im), asp=1)
# Variance matrix of the points in the rectangle
i <- which(im)
V <- var(cbind( col(im)[i], row(im)[i] ))
# Their eigenvectors
u <- eigen(V)$vectors
abline( M/2-N/2*u[2,1]/u[1,1], u[2,1]/u[1,1], lwd=5 )
abline( M/2-N/2*u[2,2]/u[1,2], u[2,2]/u[1,2] )
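If what you ultimately want is the orientation angle rather than the drawn axis, the leading eigenvector gives it directly. Here is the same idea sketched in Python/NumPy, purely for comparison with the R code above (the image im below is a made-up example, not your data):

import numpy as np

# Hypothetical example image: a filled axis-aligned rectangle.
M, N = 300, 400
im = np.zeros((M, N), dtype=bool)
im[100:200, 120:300] = True

rows, cols = np.nonzero(im)          # coordinates of the foreground pixels
V = np.cov(np.vstack([cols, rows]))  # 2x2 covariance matrix of (x, y)
w, u = np.linalg.eigh(V)             # eigenvalues ascending, eigenvectors as columns
major = u[:, np.argmax(w)]           # eigenvector of the largest eigenvalue
angle = np.degrees(np.arctan2(major[1], major[0]))
print(angle)                         # orientation of the major axis in degrees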
Try replacing the default Rblas.dll with a suitable one from this link.
Problem: Given a polynomial of degree n (with coefficients a_0 through a_(n-1)) that is guaranteed to be increasing from x = 0 to xmax, what is the most efficient algorithm to find the first m points with equally spaced y values (i.e. y_i - y_(i-1) == c, for all i)?
Example: If I want the spacing to be c = 1, and my polynomial is f(x) = x^2, then the first three points would be at y=1 (x=1), y=2 (x~=1.4142), and y=3 (x~=1.7321).
I'm not sure if it will be significant, but my specific problem involves the cube of a polynomial with given coefficients. My intuition tells me that the most efficient solution should be the same, but I'm not sure.
I'm encountering this working through the problems in the ACM's problem set for the 2012 World Finals (problem B), so this is mostly because I'm curious.
Edit: I'm not sure if this should go on the Math SE?
You can find an X for a given Y using a binary search. The time complexity is logarithmic: proportional to the log of the range of x values divided by your error tolerance.
def solveForX(polyFunc, minX, maxX, y, epsilon):
midX = (minX + maxX) / 2.0
if abs(polyFunc(midX) - y) < epsilon:
return midX
if polyFunc(midX) > y:
return solveForX(polyFunc, minX, midX, y, epsilon)
else:
return solveForX(polyFunc, midX, maxX, y, epsilon)
print solveForX(lambda x: x*x, 0, 100, 2, 0.01)
output:
1.416015625
Edit: to expand on an idea in the comments, if you know you will be searching for multiple X values, it's possible to narrow down the [minX, maxX] search range.
def solveForManyXs(polyFunc, minX, maxX, ys, epsilon):
if len(ys) == 0:
return []
midIdx = len(ys) / 2
midY = ys[midIdx]
midX = solveForX(polyFunc, minX, maxX, midY, epsilon)
lowYs = ys[:midIdx]
highYs = ys[midIdx+1:]
return solveForManyXs(polyFunc, minX, midX, lowYs, epsilon) + \
[midX] + \
solveForManyXs(polyFunc, midX, maxX, highYs, epsilon)
ys = [1, 2, 3]
print solveForManyXs(lambda x: x*x, 0, 100, ys, 0.01)
output:
[1.0000884532928467, 1.41448974609375, 1.7318960977718234]