Random Number in Fortran - random

I am learning Fortran and I would like to make a little game to practice input. The objective is to guess the right number, which is a random one. I made a code to generate the number, but my problem is that the result, although random, is always the same.
For example, when I execute the code 3 times, it prints 21 all three times.
Here is my code :
program find_good_number
    integer :: random_number
    integer :: seed
    seed = 123456789
    call srand(seed)
    random_number = int(rand(0)*100)
    print*, random_number
end program find_good_number
Can you help me, please?
Thanks

Using GNU Fortran 10.3 with the standard intrinsics, and asking the Fortran runtime library to pick the seed, every invocation of the program results in a different series of random numbers. So that should be fine for the sort of application you have in mind.
Using this code:
Program TestRandom1
    implicit none
    integer :: randomSeedSize
    integer :: count = 3
    integer :: k = 0
    real :: rx = 0.0

    call random_seed(size = randomSeedSize)
    write (*,'(a,i4)') 'size of random seed (in integers): ', &
        randomSeedSize
    call random_seed() ! use system-provided seed
    do k = 1, count
        call random_number(rx)
        write (*, '(a,f10.8)') 'rx = ', rx
    end do
End Program TestRandom1
Context:
$
$ uname -s -m -r
Linux 5.13.9-100.fc33.x86_64 x86_64
$
$ gfortran --version
GNU Fortran (GCC) 10.3.1 20210422 (Red Hat 10.3.1-1)
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$
Testing:
$
$ random1.x
size of random seed (in integers): 8
rx = 0.23642105
rx = 0.39820033
rx = 0.62709534
$
$ random1.x
size of random seed (in integers): 8
rx = 0.84118658
rx = 0.45977014
rx = 0.09513164
$
$ random1.x
size of random seed (in integers): 8
rx = 0.33584720
rx = 0.86550051
rx = 0.26546007
$
A seed size of 8*32 = 256 bits is consistent with the xoshiro256 algorithm mentioned in the GNU Fortran documentation.
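Applied to the code in the question, a minimal fix (a sketch, using only standard intrinsics) is to drop the srand/rand GNU extensions in favour of RANDOM_SEED/RANDOM_NUMBER, and to rename the variable so it no longer shadows the RANDOM_NUMBER intrinsic:

```fortran
program find_good_number
    implicit none
    real    :: r
    integer :: secret

    call random_seed()      ! system-provided seed, different on each run
    call random_number(r)   ! uniform real in [0,1)
    secret = int(r * 100)   ! integer in 0..99
    print *, secret
end program find_good_number
```

Each run should now print a different number, which is what the guessing game needs.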

Related

Generate N points following a normal distribution

Does anyone know how I could easily generate N random numbers following a normal distribution, with a mean mu and a standard deviation sigma, in Fortran 90?
Or even the logic to produce the N values?
(Not a Fortran programmer)
The standard / your compiler defines a random function for uniform random values within (0,1) (example: GNU Fortran docs).
Now just select one of the well-known, specialized sampling algorithms designed for Gaussian sampling listed on Wikipedia.
The most well-known:
Box-Muller transform
Polar-method
Ziggurat
PROGRAM XX
    IMPLICIT NONE
    REAL :: Std_Dev = .1
    REAL :: Mean = 0.0
    REAL :: Variance
    INTEGER :: N = 100
    REAL, DIMENSION(100) :: RData
    REAL, PARAMETER :: Steigler = 1.0/6.28

    Variance = Std_Dev**2       ! variance from the std deviation, maybe with some Steigler or 2.88 distribution
    CALL RANDOM_NUMBER(RData)   ! uniformly distributed in 0-1
    RData = RData * Variance    ! RData is now scaled by the variance
    RData = RData + Mean        ! RData is now scaled and shifted by the mean
    WRITE(*,*) 'Data=', RData
    !... Insert your statistics code here to debug it...
END PROGRAM
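Note that the snippet above only rescales uniform variates, so the result is still uniformly distributed, not Gaussian. A minimal Box-Muller sketch in Fortran 90 (names are illustrative; it uses only the standard RANDOM_NUMBER intrinsic) would look like:

```fortran
program box_muller_demo
    implicit none
    integer, parameter :: n = 100
    real, parameter    :: mu = 0.0, sigma = 0.1
    real, parameter    :: two_pi = 6.2831853
    real :: u1(n), u2(n), z(n)

    call random_seed()
    call random_number(u1)   ! uniform in [0,1)
    call random_number(u2)
    ! Box-Muller: z is standard normal; 1.0-u1 avoids log(0)
    z = sqrt(-2.0 * log(1.0 - u1)) * cos(two_pi * u2)
    z = mu + sigma * z       ! shift/scale to the requested mean and std dev
    print *, 'sample mean ~', sum(z) / n
end program box_muller_demo
```

Each pair (u1, u2) yields an independent normal variate; the sine term of the transform would give a second one per pair if needed.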

Sum all numbers from one to a billion in Haskell

Currently I'm catching up on Haskell, and I'm super impressed so far. As a super simple test I wrote a program which computes the sum up to a billion. In order to avoid list creation, I wrote a function which should be tail recursive:
summation start upto
  | upto == 0 = start
  | otherwise = summation (start+upto) (upto-1)

main = print $ summation 0 1000000000
Running this with -O2 I get a runtime of about ~20 sec on my machine, which kind of surprised me, since I thought the compiler would optimise more. As a comparison I wrote a simple C++ program:
#include <iostream>

int main(int argc, char *argv[]) {
    long long result = 0;
    int upto = 1000000000;
    for (int i = 0; i < upto; i++) {
        result += i;
    }
    std::cout << result << std::endl;
    return 0;
}
Compiling with clang++ without optimisation, the runtime is ~3 secs. So I was wondering why my Haskell solution is so slow. Does anybody have an idea?
On OSX:
clang++ --version:
Apple LLVM version 7.0.2 (clang-700.1.81)
Target: x86_64-apple-darwin15.2.0
Thread model: posix
ghc --version:
The Glorious Glasgow Haskell Compilation System, version 7.10.3
Adding a type signature dropped my runtime from 14.35 seconds to 0.27. It is now faster than the C++ on my machine. Don't rely on type-defaulting when performance matters. Ints aren't preferable for, say, modeling a domain in a web application, but they're great if you want a tight loop.
module Main where

summation :: Int -> Int -> Int
summation start upto
  | upto == 0 = start
  | otherwise = summation (start+upto) (upto-1)

main = print $ summation 0 1000000000
[1 of 1] Compiling Main ( code/summation.hs, code/summation.o )
Linking bin/build ...
500000000500000000
14.35user 0.06system 0:14.41elapsed 100%CPU (0avgtext+0avgdata 3992maxresident)k
0inputs+0outputs (0major+300minor)pagefaults 0swaps
Linking bin/build ...
500000000500000000
0.27user 0.00system 0:00.28elapsed 98%CPU (0avgtext+0avgdata 3428maxresident)k
0inputs+0outputs (0major+171minor)pagefaults 0swaps
Skip the strike-out unless you want to see the unoptimized (non -O2) view.
Let's look at the evaluation:
summation start upto
  | upto == 0 = start
  | otherwise = summation (start+upto) (upto-1)

main = print $ summation 0 1000000000
-->
summation 0 1000000000
-->
summation (0 + 1000000000) 999999999
-->
summation (0 + 1000000000 + 999999999) 999999998
-->
summation (0 + 1000000000 + 999999999 + 999999998) 999999997
EDIT: I didn't see that you had compiled with -O2, so the above isn't occurring. The accumulator, even without any strictness annotations, suffices most of the time with proper optimization levels.
Oh no! You are storing one billion numbers in a big thunk that you aren't evaluating! Tisk! There are lots of solutions using accumulators and strictness - it seems like most stackoverflow answers with anything near this question will suffice to teach you those in addition to library functions, like fold{l,r}, that help you avoid writing your own primitive recursive functions. Since you can look around and/or ask about those concepts I'll cut to the chase with this answer.
If you really want to do this the correct way then you'd use a list and learn that Haskell compilers can do "deforestation" which means the billion-element list is never actually allocated:
main = print (sum [0..1000000000])
Then:
% ghc -O2 x.hs
[1 of 1] Compiling Main ( x.hs, x.o )
Linking x ...
% time ./x
500000000500000000
./x 16.09s user 0.13s system 99% cpu 16.267 total
Cool, but why 16 seconds? Well, by default those values are Integers (GMP integers for the GHC compiler) and that's slower than a machine Int. Let's use Int!
% cat x.hs
main = print (sum [0..1000000000] :: Int)
tommd#HalfAndHalf /tmp% ghc -O2 x.hs && time ./x
500000000500000000
./x 0.31s user 0.00s system 99% cpu 0.311 total
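For completeness, the standard-library route combines both ideas: Data.List's foldl' is a strict left fold, so the accumulator never builds a thunk, and with -O2 the intermediate list should be fused away much like in the sum version:

```haskell
module Main where

import Data.List (foldl')

main :: IO ()
main = print (foldl' (+) 0 [0 .. 1000000000 :: Int])
```

The :: Int annotation matters here for the same type-defaulting reason discussed above.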

Other ways to get a random number in Lua

I'm looking for an alternative way to get a random number in Lua that is between a minimum and a maximum number without using math.random(). Is there any way? It doesn't have to be a simple method.
Like the comments have hinted at, on Unix-like systems you can read bytes from /dev/random or /dev/urandom, and create a random number from them.
urand = assert (io.open ('/dev/urandom', 'rb'))
rand  = assert (io.open ('/dev/random', 'rb'))

function RNG (b, m, r)
  b = b or 4
  m = m or 256
  r = r or urand
  local n, s = 0, r:read (b)
  for i = 1, s:len () do
    n = m * n + s:byte (i)
  end
  return n
end
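To land in a min-max range as the question asks, the raw value can be reduced with the modulo operator. This is a sketch built on the RNG function above (the helper name is mine, and note the slight modulo bias unless the RNG's range is a multiple of the target range):

```lua
-- hypothetical helper: random integer in [lo, hi], using RNG above
function RNG_range (lo, hi)
  return lo + RNG () % (hi - lo + 1)
end

print (RNG_range (1, 100))
```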
As an extension to this answer, and for fun, I've authored a very tiny module, randbytes, so that future readers may play around with the /dev/random and /dev/urandom interfaces in a simple manner. Here's a quick rundown.
Install with luarocks or get it manually.
$ luarocks install randbytes
Require the module, or file.
$ lua
> randbytes = require 'randbytes'
And then grab some bytes.
> print (randbytes (8))
For now, I've cleaned up and included the very simple generation algorithm shown above, for generating basic random numbers.
> print (randbytes:urandom (16))
You can build on top of the basic interface to implement your own algorithms. Read the documentation for a full list of methods, and settings.

Does gfortran take advantage of DO CONCURRENT?

I'm currently using gfortran 4.9.2 and I was wondering if the compiler actually knows how to take advantage of the DO CONCURRENT construct (Fortran 2008). I know that the compiler "supports" it, but it is not clear what that entails. For example, if automatic parallelization is turned on (with some number of threads specified), does the compiler know how to parallelize a do concurrent loop?
Edit: As mentioned in the comment, this previous question on SO is very similar to mine, but it is from 2012, and only very recent versions of gfortran have implemented the newest features of modern Fortran, so I thought it was worth asking about the current state of the compiler in 2015.
Rather than explicitly enabling some new functionality, DO CONCURRENT in gfortran seems to put restrictions on the programmer in order to implicitly allow parallelization of the loop when required (using the option -ftree-parallelize-loops=NPROC).
While a DO loop can contain any function call, the content of DO CONCURRENT is restricted to PURE functions (i.e., having no side effects). So when one attempts to use, e.g., RANDOM_NUMBER (which is not PURE as it needs to maintain the state of the generator) in DO CONCURRENT, gfortran will protest:
prog.f90:25:29:
25 | call random_number(x)
| 1
Error: Subroutine call to intrinsic ‘random_number’ in DO CONCURRENT block at (1) is not PURE
Otherwise, DO CONCURRENT behaves as normal DO. It only enforces use of parallelizable code, so that -ftree-parallelize-loops=NPROC succeeds. For instance, with gfortran 9.1 and -fopenmp -Ofast -ftree-parallelize-loops=4, both the standard DO and the F08 DO CONCURRENT loops in the following program run in 4 threads and with virtually identical timing:
program test_do
    use omp_lib, only: omp_get_wtime
    integer, parameter :: n = 1000000, m = 10000
    real, allocatable :: q(:)
    integer :: i, j
    real :: t0

    allocate(q(n))

    t0 = omp_get_wtime()
    do i = 1, n
        q(i) = i
        do j = 1, m
            q(i) = 0.5 * (q(i) + i / q(i))
        end do
    end do
    print *, omp_get_wtime() - t0

    t0 = omp_get_wtime()
    do concurrent (i = 1:n)
        q(i) = i
        do j = 1, m
            q(i) = 0.5 * (q(i) + i / q(i))
        end do
    end do
    print *, omp_get_wtime() - t0
end program test_do

BLAS subroutines dgemm, dgemv and ddot don't work with scalars?

I have a Fortran subroutine which uses BLAS' subroutines dgemm, dgemv and ddot, which calculate matrix * matrix, matrix * vector and vector * vector products. I have m * m matrices and m * 1 vectors. In some cases m=1. It seems that those subroutines don't work well in those cases. They don't give errors, but there seems to be some numerical instability in the results. So I have to write something like:
if (m > 1) then
    vtuni(i,t) = yt(i,t) - ct(i,t) - ddot(m, zt(i,1:m,(t-1)*tvar(3)+1), 1, arec, 1)
else
    vtuni(i,t) = yt(i,t) - ct(i,t) - zt(i,1,(t-1)*tvar(3)+1)*arec(1)
end if
So my actual question is: am I right that those BLAS subroutines don't work properly when m=1, or is there just something wrong in my code? Can the compiler affect this? I'm using gfortran.
BLAS routines are supposed to behave correctly with objects of size 1. I don't think it can depend on the compiler, but it could possibly depend on the implementation of BLAS you're relying on (though I'd consider that a bug of the implementation). The reference (read: not target-optimised) implementation of BLAS, which can be found on Netlib, handles that case fine.
I've done some testing on both arrays of size 1, and size-1 slices of a larger array (as in your own code), and they both work fine:
$ cat a.f90
implicit none
double precision :: u(1), v(1)
double precision, external :: ddot
u(:) = 2
v(:) = 3
print *, ddot(1, u, 1, v, 1)
end
$ gfortran a.f90 -lblas && ./a.out
6.0000000000000000
$ cat b.f90
implicit none
double precision, allocatable :: u(:,:,:), v(:)
double precision, external :: ddot
integer :: i, j
allocate(u(3,1,3),v(1))
u(:,:,:) = 2
v(:) = 3
i = 2
j = 2
print *, ddot(1, u(i,1:1,j), 1, v, 1)
end
$ gfortran b.f90 -lblas && ./a.out
6.0000000000000000
Things I'd consider to debug this problem further:
Check that your ddot definition is correct
Substitute the reference BLAS for your optimised one, to check if it changes anything (you can just compile and link in the ddot.f file I linked to earlier in my answer)
