Monte Carlo pi method - algorithm

I am trying to estimate pi with a Monte Carlo method in R, but I have some problems with my code.
So far I have written this:
ploscinaKvadrata <- 0
ploscinaKroga <- 0
n = 1000
for (i in i:n) {
  x <- runif(1000, min= -1, max= 1)
  y <- runif(1000, min= -1, max= 1)
  if ((x^2 + y^2) <= 1) {
    ploscinaKroga <- ploscinaKroga + 1
  } else {
    ploscinaKvadrata <- ploscinaKvadrata + 1
  }
  izracunPi = 4* ploscinaKroga/ploscinaKvadrata
}
izracunPi
This is not working, but I don't know how to fix it.
I would also like to write code to plot this (the circle inside the square, with the dots).

Here is a vectorized version (there was also something wrong with your math: the estimate is 4 times the fraction of all points that fall inside the circle, not the ratio of points inside the circle to points outside it).
N <- 1000000
R <- 1
x <- runif(N, min= -R, max= R)
y <- runif(N, min= -R, max= R)
is.inside <- (x^2 + y^2) <= R^2
pi.estimate <- 4 * sum(is.inside) / N
pi.estimate
# [1] 3.141472
As for plotting the points, you can do something like this:
plot.new()
plot.window(xlim = 1.1 * R * c(-1, 1), ylim = 1.1 * R * c(-1, 1))
points(x[ is.inside], y[ is.inside], pch = '.', col = "blue")
points(x[!is.inside], y[!is.inside], pch = '.', col = "red")
but I'd recommend you use a smaller N value, maybe 10000.
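If you also want to see the circle inside the square, as in your question, you can draw their outlines on top of the points. This is just a small base-graphics sketch; R, x, y and is.inside come from the code above, and adding asp = 1 to the plot.window() call keeps the circle round:
rect(-R, -R, R, R)                        # the square
theta <- seq(0, 2 * pi, length.out = 200)
lines(R * cos(theta), R * sin(theta))     # the circle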

This is a fun exercise, and there are a number of versions of it floating around the web. Here's one I adapted from the source below (though the original code was somewhat naive):
From http://giventhedata.blogspot.com/2012/09/estimating-pi-with-r-via-mcs-dart-very.html
est.pi <- function(n){
  # drawing in [0,1] x [0,1] covers one quarter of square and circle
  # draw random numbers for the coordinates of the "dart-hits"
  a <- runif(n, 0, 1)
  b <- runif(n, 0, 1)
  # use the pythagorean theorem
  c <- sqrt((a^2) + (b^2))
  inside <- sum(c < 1)
  #outside <- n - inside
  pi.est <- inside/n * 4
  return(pi.est)
}
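For example (the exact value varies from run to run, but with a large n it should land close to pi):
est.pi(1e6)
# typically something around 3.14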

Related

JAGS - pow function does not work properly in mixture model with label switching

I am fitting a mixture model to estimate the average of a trait in each of 3 populations.
I have a label switching issue and I am trying to compute the distance between the observed and expected numbers of individuals of each genotype in each population to relabel population clusters. Below is a reproducible example.
For some reason, JAGS does not compute the squared distances properly. The corresponding line in the code below is: dist[k,j] <- pow(DistNumPerClust[k,j], 2)
Hence, the output matrix results$mean$dist is different from the matrix results$mean$DistNumPerClust^2 computed a posteriori.
Would anyone know a way to solve this?
library(R2jags)
library(runjags)
library(dirmult)
set.seed(123)
############################
## Simulation of the data ##
############################
npop=3
ngeno=2
freqbalance=1
nsamplesizeperpop <- 100
freqMLG <- t(rdirichlet(n=npop, alpha=rep(freqbalance, ngeno)))
samplesizegenoperpop <- sweep(freqMLG, 1, nsamplesizeperpop, "*")
## Compute membership (probability that a genotype comes from pop 1, 2 or 3)
## Genotype as rows and populations as columns
membership <- sweep(freqMLG, 1, rowSums(freqMLG), "/")
# Parameters for simulations
nind=90
N = npop*nind # nb of observations
clust <- rep(1:npop, each=N/npop)
geno <- c()
for (i in 1:N){
geno <- c(geno, sum(rmultinom(n=1, size=1, prob=freqMLG[, clust[i]])*1:ngeno))
}
numgeno <- as.numeric(table(geno))
## Multiply membership probabilities by sample size for each genotype
ExpNumPerClust <- sweep(membership, 1, numgeno, "*")
muOfClustsim <- c(1, 20, 50) # vector of population means
sigma <- 1.5 # residual sd
(tausim <- 1/(sigma*sigma)) # precision
# parameters are treated as data for the simulation step
data <- list(N=N, npop=npop, ngeno=ngeno, geno=geno, muOfClustsim=muOfClustsim, tausim=tausim, samplesizegenoperpop=samplesizegenoperpop)
## JAGS model
txtstring <- "
data{
  # Likelihood:
  for (i in 1:N){
    ysim[i] ~ dnorm(eta[i], tausim) # tau is precision (1 / variance)
    eta[i] <- muOfClustsim[clust[i]]
    clust[i] ~ dcat( pClust[geno[i], 1:npop] )
  }
  for (k in 1:ngeno){
    pClust[k, 1:npop] ~ ddirch( samplesizegenoperpop[k,] )
  }
}
model{
  fake <- 0
}
"
# Simulate with jags
out <- run.jags(txtstring, data = data, monitor=c("ysim"), sample=1, n.chains=1, summarise=FALSE)
# reformat the outputs
ysim <- coda::as.mcmc(out)[1:N]
## Estimation model
bayes.mod <- function(){
  # Likelihood:
  for (i in 1:N){
    ysim[i] ~ dnorm(eta[i], tau) # tau is precision (1 / variance)
    eta[i] <- beta[clust[i]]
    clust[i] ~ dcat( pClust[geno[i], 1:npop] )
  }
  for (k in 1:ngeno){
    ## pClust membership estimates
    pClust[k, 1:npop] ~ ddirch( samplesizegenoperpop[k,] )
  }
  for (k in 1:ngeno){
    for (j in 1:npop){
      # label switching problem: compute the distance between the observed and
      # expected numbers of individuals of each genotype in each population
      ObsNumPerClust[k,j] <- pClust[k, j] * numgeno[k]
      DistNumPerClust[k,j] <- ObsNumPerClust[k,j] - ExpNumPerClust[k,j]
      dist[k,j] <- pow(DistNumPerClust[k,j], 2)
    }
  }
  # Priors
  beta ~ dmnorm(mu, sigma.inv)
  mu ~ dmnorm(m, V)
  sigma.inv ~ dwish(R, K)
  tau ~ dgamma(0.01, 0.01)
  # parameter transformations
  sig <- sqrt(1/tau)
}
m = rep(1, npop)
V = diag(rep(0.01, npop))
R = diag(rep(0.1, npop))
K = npop
## Input variables
sim.dat.jags<-list("ysim","N","npop", "ngeno", "geno","m","V","R", "K", "samplesizegenoperpop","numgeno","ExpNumPerClust")
## Variables to monitor
bayes.mod.params <- c("beta","tau","sig","DistNumPerClust","dist")
## Starting values
init1 <- list(beta = c(0, 100, 1000), tau = 1)
bayes.mod.inits <- list(init1)
## Run model
bayes.mod.fit<-jags(data = sim.dat.jags, inits = bayes.mod.inits, parameters.to.save = bayes.mod.params, n.chains=1, n.iter=101000, n.burnin=1000, n.thin=200, model.file = bayes.mod)
results <- print(bayes.mod.fit)
results$mean$dist
results$mean$DistNumPerClust^2
It seems that you expect that the mean of a transformed set of values will give the same result as transforming the mean of the same set of values. But this is not the case - for example:
values <- c(1,2,3,6,8,20)
mean(values)^2
mean(values^2)
These are not the same thing.
The equivalent is happening in your model: you calculate dist[k,j] as the square of DistNumPerClust[k,j] and then summarise dist to its mean, and you expect this to be the same as the square of the mean of DistNumPerClust[k,j]. Or, in a simpler example:
library('runjags')
X <- 1:100
Y <- rnorm(length(X), 2*X + 10, 1)
model <- "model {
  for(i in 1 : N){
    Y[i] ~ dnorm(true.y[i], precision);
    true.y[i] <- (m * X[i]) + c
  }
  m ~ dunif(-1000,1000)
  c ~ dunif(-1000,1000)
  precision ~ dexp(1)
  p2 <- precision^2
}"
data <- list(X=X, Y=Y, N=length(X))
results <- run.jags(model=model, monitor=c("m", "c", "precision", "p2"),
data=data, n.chains=2)
results
More specifically, these should not be expected to be the same:
summary(results)['p2','Mean']
summary(results)['precision','Mean']^2
If you want to calculate the same thing you can extract the full chain of values as an MCMC object and do your transformation on these:
p <- combine.mcmc(results,vars='precision')
p2 <- combine.mcmc(results,vars='p2')
mean(p^2)
mean(p2)
mean(p)
mean(sqrt(p2))
Now everything is equivalent.
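Applied back to your original model, that means comparing results$mean$dist against the posterior mean of the squared draws, not against the square of the posterior mean. A minimal sketch, assuming the R2jags object bayes.mod.fit from your question (its BUGSoutput$sims.list holds the raw draws):
dd <- bayes.mod.fit$BUGSoutput$sims.list$DistNumPerClust  # draws: iterations x ngeno x npop
apply(dd^2, c(2, 3), mean)   # posterior mean of the squared distances;
                             # this should match results$mean$dist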
Matt

Comparing Record Results and Double For Loop

I have a double loop that I not only dislike, but that would take about 14 days to run on my computer, since it goes over 3200 records with 1090 variables each at about 0.12 seconds per iteration.
Here is a smaller reproducible example. It simply counts how many values two records have in common in the same columns, not counting NAs, and then attaches the results to the original data frame.
y <- data.frame(c(1,2,1,NA,NA),c(3,3,3,4,NA),c(5,4,5,7,7),c(7,8,7,9,10))
resultdf <- NULL
for(i in 1:nrow(y))
{
  results <- NULL
  for(j in 1:nrow(y))
  {
    results <- c(results, sum((y[i,]==y[j,]), na.rm=TRUE))
  }
  resultdf <- cbind(resultdf, results)
}
y <- cbind(y,resultdf)
There are repeated calculations that could possibly be avoided, which would leave about 7 days.
If I understand correctly, some of the apply functions are implemented in C, which might be faster. I haven't been able to get any of them to work, though. I'm also curious whether there is a package that would run faster. Can anyone help speed up the calculation?
Thank you!
I have created data to your specifications and, following @BenBolker's suggestion, stored it as a matrix:
> y <- matrix(sample(c(1:9, NA), 3200 * 1090, replace = TRUE),
+ nrow = 3200, ncol = 1090)
and compared the computation times for three different implementations:
f1 was suggested by @Andrei:
> f1 <- function(y)apply(y, 1, function(r1)
+ apply(y, 1, function(r2)sum(r1==r2, na.rm=TRUE)))
> system.time(r1 <- f1(y))
user system elapsed
523.51 0.77 528.73
f2 was suggested by @VincentZoonekynd:
> f2 <- function(y) {
+ f <- function(i,j) sum(y[i,] == y[j,], na.rm=TRUE)
+ d <- outer( 1:nrow(y), 1:nrow(y), Vectorize(f) )
+ return(d)
+ }
> system.time(r2 <- f2(y))
user system elapsed
658.94 1.96 710.67
f3 is a double loop over the upper triangle, as suggested by @BenBolker. It is also a bit more efficient than your original code in that it pre-allocates the output matrix:
> f3 <- function(y) {
+ result <- matrix(NA, nrow(y), nrow(y))
+ for (i in 1:nrow(y)) {
+ row1 <- y[i, ]
+ for (j in i:nrow(y)) {
+ row2 <- y[j, ]
+ num.matches <- sum(row1 == row2, na.rm = TRUE)
+ result[i, j] <- num.matches
+ result[j, i] <- num.matches
+ }
+ }
+ return(result)
+ }
> system.time(r3 <- f3(y))
user system elapsed
167.66 0.08 168.72
So the double loop is the fastest of all three, although not as elegant and compact as the other two answers.
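If you stay with the double loop, byte-compiling it can sometimes shave off a bit more time. This is only an aside, sketched here without new timings (the gain depends on your R version, and recent versions byte-compile functions automatically):
library(compiler)
f3c <- cmpfun(f3)              # byte-compiled version of the double loop
# system.time(r3c <- f3c(y))   # compare against the timings above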
Here is another solution, using outer.
f <- function(i,j) sum(y[i,] == y[j,], na.rm=TRUE)
d <- outer( 1:nrow(y), 1:nrow(y), Vectorize(f) )
Indeed, you can use the apply function. Given the earlier hint that a matrix works faster, I would try:
ym <- as.matrix(y)
resultdf <- apply(ym, 1, function(r1) apply(ym, 1, function(r2) sum(r1==r2, na.rm=TRUE)))
You can get rid of the inner loop (using the y and f3 from @flodel's answer):
ty <- t(y)
ix <- rep(1:nrow(y),each = ncol(y))
f4 <- function(y){
  result <- matrix(0L, nrow(y), nrow(y))
  for(r in 1:nrow(y))
    result[r,] <- rowsum(as.numeric(ty == y[r,]), ix, na.rm = TRUE)
  result
}
> system.time(out <- f4(y))
user system elapsed
52.616 21.061 74.000
> system.time(out <- f3(y))
user system elapsed
244.751 0.136 244.954
>
It actually does extra work by computing the same thing twice, but it is still about 5 times faster. You can make it yet another 4 times faster by using the inner workings of rowsum; see this question for an example.
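Another option, not used in this thread, is to exploit the small number of distinct values: build one indicator matrix per value and let fast matrix products do the counting. This is only a sketch under that assumption (the example y above holds the integers 1:9 plus NAs); it computes the same matrix as f3:
f5 <- function(y) {
  vals <- sort(unique(as.vector(y[!is.na(y)])))
  out <- matrix(0, nrow(y), nrow(y))
  for (v in vals) {
    ind <- !is.na(y) & y == v      # TRUE where a record holds the value v
    out <- out + tcrossprod(ind)   # adds 1 for every column where both records equal v
  }
  out
}
# all.equal(f5(y), f3(y), check.attributes = FALSE)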

Make this loop faster in R

How can I speed up the following (noob) code:
#"mymatrix" is the matrix of word counts (docs X terms)
#"tfidfmatrix" is the transformed matrix
tfidfmatrix = Matrix(mymatrix, nrow=num_of_docs, ncol=num_of_words, sparse=T)
#Apply a transformation on each row of the matrix
for(i in 1:dim(mymatrix)[[1]]){
  r = mymatrix[i,]
  s = sapply(r, function(x) ifelse(x==0, 0, (1+log(x))*log((1+ndocs)/(1+x)) ) )
  tfmat[i,] = s/sqrt(sum(s^2))
}
return (tfidfmatrix)
The problem is that the matrices I am working with are fairly large (~40k x 100k), and this code is very slow.
The reason I am not using apply (instead of a for loop and sapply) is that apply would give me the transpose of the matrix I want: I want num_of_docs x num_of_words, but apply returns the transpose, so I would then have to spend more time transposing it back and re-allocating it.
Any thoughts on making this faster?
Thanks much.
Edit: I have found that the suggestions below greatly speed up my code (besides making me feel stupid). Any suggestions on where I can learn to write "optimized" R code?
Edit 2: OK, so something is not right. Once I do s.vec[!is.finite(s.vec)] <- 0, every element of s.vec is set to 0. Just to reiterate, my original matrix is a sparse matrix containing integers, so this seems to be due to some quirk of the Matrix package I am using. When I do s.vec[which(s.vec==-Inf)] <- 0, things work as expected. Thoughts?
As per my comment,
#Slightly larger example data
mymatrix <- matrix(runif(10000),nrow=10)
mymatrix[sample(10000,100)] <- 0
tfmat <- matrix(nrow=10, ncol=1000)
ndocs <- 1
justin <- function(){
  s.vec <- ifelse(mymatrix==0, 0, (1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix)))
  tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))
}
joran <- function(){
  s.vec <- (1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix))
  s.vec[!is.finite(s.vec)] <- 0
  tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))
}
require(rbenchmark)
benchmark(justin(),joran(),replications = 1000)
test replications elapsed relative user.self sys.self user.child sys.child
2 joran() 1000 0.940 1.00000 0.842 0.105 0 0
1 justin() 1000 2.786 2.96383 2.617 0.187 0 0
So it's around 3x faster or so.
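Regarding your second edit: since the transformation maps 0 to 0, another way to handle the sparse case is to transform only the nonzero entries via the @x slot, so the matrix never needs to be densified. This is just a sketch, assuming mymatrix is stored as a dgCMatrix from the Matrix package as in your original code:
library(Matrix)
tf <- Matrix(mymatrix, sparse = TRUE)                       # dgCMatrix
tf@x <- (1 + log(tf@x)) * log((1 + ndocs) / (1 + tf@x))     # transform nonzero entries only
tf <- Diagonal(x = 1 / sqrt(rowSums(tf^2))) %*% tf          # scale each row to unit length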
Not sure what ndocs is, but ifelse is already vectorized, so you should be able to use the ifelse statement without walking through the matrix row by row and sapplying along each row. The same can be said for the final calculation.
However, you haven't given a complete example to replicate...
mymatrix <- matrix(runif(100),nrow=10)
tfmat <- matrix(nrow=10, ncol=10)
ndocs <- 1
s.vec <- ifelse(mymatrix==0, 0, 1 + log(mymatrix)) * log((1 + ndocs)/(1 + mymatrix))
for(i in 1:dim(mymatrix)[[1]]){
  r = mymatrix[i,]
  s = sapply(r, function(x) ifelse(x==0, 0, (1+log(x))*log((1+ndocs)/(1+x)) ) )
  tfmat[i,] <- s
}
all.equal(s.vec, tfmat)
So the only piece missing is the rowSums in your final calculation:
tfmat.vec <- s.vec/sqrt(rowSums(s.vec^2))
for(i in 1:dim(mymatrix)[[1]]){
  r = mymatrix[i,]
  s = sapply(r, function(x) ifelse(x==0, 0, (1+log(x))*log((1+ndocs)/(1+x)) ) )
  tfmat[i,] = s/sqrt(sum(s^2))
}
all.equal(tfmat, tfmat.vec)

Problem with 64 bit R's optim under windows 7

I am currently in the final phase of developing my first R package which is supposed to fit multinomial processing tree (MPT) models (see its homepage for the current version). The model fitting is achieved by R's optim function.
Today I was playing around with it for the first time on a Windows 7 machine and noticed something really strange: optim does not converge successfully when using the 64 bit version of R. This looks like a bug to me (especially as nlminb converges in both R versions). As optim is at the core of my package, any help on this question is greatly appreciated.
Here is a minimal reproducible example (normally the model is specified via expressions rather than inside the objective function, but for simplicity I put everything in the objective function here):
# The objective function:
llk.tree <- function(Q, data)
{
  p <- Q[1]
  q <- Q[2]
  r <- Q[3]
  e <- rep(NA, 4)
  e[1] <- p * q * r
  e[2] <- p * q * (1-r)
  e[3] <- p * (1-q) * r
  e[4] <- p * (1-q) * (1-r) + (1-p)
  llk <- sum(data * log(e))
  if (is.na(llk))
    llk <- -1e+19
  if (llk == -Inf)
    llk <- -1e+19
  return(-llk)
}
# The call to optim:
optim(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
This example reproduces an example from a seminal paper on MPTs by Riefer & Batchelder, namely row 1 in Table 1 p. 327 (expected parameter values would be p = 1, q = .49 and r = .30).
Running it on a 32 bit R always gives the correct result (tried with versions 2.12.2 and 2.13.0):
$par
[1] 1.0000000 0.4944449 0.3000001
$value
[1] 234.7110
$counts
function gradient
11 11
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
(Note that count may differ due to random starting values.)
Running it on a 64 bit R, on the other hand, may produce a (wrong) result such as this:
$par
[1] 0.8668081 0.6326655 0.1433857
$value
[1] 257.7328
$counts
function gradient
3 3
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
The returned value of the objective function and the returned parameter values differ on each run, but count is always 3!
Note that running nlminb produces correct results on 32 bit and 64 bit R:
nlminb(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), lower = 0, upper = 1)
$par
[1] 1.0000000 0.4944445 0.3000000
$objective
[1] 234.711
$convergence
[1] 0
$iterations
[1] 14
$evaluations
function gradient
19 55
$message
[1] "relative convergence (4)"
One final note: we have examples (this is our simplest example model) that worked with optim on 64 bit R, but more examples (like the one shown here) that did not.
And count is always 3...
EDIT:
When fixing the starting values (thanks to Joshua Ulrich) optim does not move away from those fixed values under 64 bit R:
optim(c(0.5, 0.5, 0.5), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
$par
[1] 0.5 0.5 0.5
$value
[1] 276.1238
$counts
function gradient
3 3
$convergence
[1] 0
$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
We did some more testing today and found the same problem as described in the question under Linux with 64 bit R.
However, thanks to an ingenious idea from Joachim Vandekerckhove, we tried a simple change that solved the problem (although the issue remains suspicious): at the end of the objective function, when llk is -Inf we replace it with an extremely negative value (it was -1e19).
Using a smaller magnitude (e.g., -1e10) removes the problem on 64 bit machines (so far tested on Linux):
llk.tree <- function(Q, data)
{
  p <- Q[1]
  q <- Q[2]
  r <- Q[3]
  e <- rep(NA, 4)
  e[1] <- p * q * r
  e[2] <- p * q * (1-r)
  e[3] <- p * (1-q) * r
  e[4] <- p * (1-q) * (1-r) + (1-p)
  llk <- sum(data * log(e))
  if (is.na(llk))
    llk <- -1e+10
  if (llk == -Inf)
    llk <- -1e+10
  return(-llk)
}
# The call to optim:
optim(runif(3, 0.1, 0.9), llk.tree, data = c(24, 65, 30, 61), method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))
This returns the correct result!
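One more idea we have not tried on the 64 bit setup above, sketched here for completeness: instead of replacing -Inf with a hard penalty, bound the category probabilities away from zero before taking the log, so the objective stays finite and smooth (llk.tree.bounded is just an illustrative name):
llk.tree.bounded <- function(Q, data) {
  p <- Q[1]; q <- Q[2]; r <- Q[3]
  e <- c(p * q * r,
         p * q * (1 - r),
         p * (1 - q) * r,
         p * (1 - q) * (1 - r) + (1 - p))
  -sum(data * log(pmax(e, 1e-10)))   # pmax() avoids log(0) = -Inf
}
optim(runif(3, 0.1, 0.9), llk.tree.bounded, data = c(24, 65, 30, 61),
      method = "L-BFGS-B", lower = rep(0, 3), upper = rep(1, 3))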

Applying nlminb to subsets of data (by index or label) and storing what the program returns as a new data frame

I was wondering if anyone could kindly help me with this seemingly easy task. I'm using nlminb to conduct optimization and compute some statistics by index. Here's an example from nlminb help.
> x <- rnbinom(100, mu = 10, size = 10)
> hdev <- function(par) {
+ -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
+ }
> nlminb(c(9, 12), hdev)
$par
[1] 9.730000 5.954936
$objective
[1] 297.2074
$convergence
[1] 0
$message
[1] "relative convergence (4)"
$iterations
[1] 10
$evaluations
function gradient
12 27
Suppose I generate random variables x, y, and z where z acts as an index (from 1 to 3).
> x <- rnbinom(100, mu = 10, size = 10)
> y <- rnbinom(100, mu = 10, size = 10)
> z <- rep(1:3, length=100)
> A <- cbind(x,y,z)
> hdev <- function(par) {
+ -sum(dnbinom(x+y, mu = par[1], size = par[2], log = TRUE))}
How can I apply nlminb(c(9, 12), hdev) to the data set by index z? In other words, I would like to compute nlminb(c(9, 12), hdev) for z=1, z=2, and z=3 separately. I tried by(A, z, function(A) nlminb(c(9,12), hdev)) and sparseby(A, z, function(A) nlminb(c(9,12), hdev)), but they return exactly the same values for each value of z.
I would like to turn each output into a new data frame so that it will become a 3X2 matrix.
[1] Z1_ANSWER_1 Z1_ANSWER_2
[2] Z2_ANSWER_1 Z2_ANSWER_2
[3] Z3_ANSWER_1 Z3_ANSWER_2
Since nlminb returns a list of results, I would need to use CASEZ1 <- nlminb(...)$par, CASEZ2 <- nlminb(...)$par, CASEZ3 <- nlminb(...)$par and then cbind to combine them. However, I would like to automate this process, as the real data I'm working on has a lot more categories than the z presented here.
If I'm not making myself clear, please let me know. I'll see if I can replicate the actual data set and functions I'm working on (I just don't have them on this computer).
Thank you very much in advance.
Let me try an approach
x <- rnbinom(100, mu = 10, size = 10)
y <- rnbinom(100, mu = 10, size = 10)
z <- rep(1:3, length=100)
A <- as.data.frame(cbind(x,y,z))
First, load the plyr library:
library(plyr)
The following code returns the results for each z
dlply(A, .(z), function(x) {
  hdev <- function(par, mydata) {-sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))}
  nlminb(c(9, 12), hdev, mydata = t(as.vector(x[1] + as.vector(x[2]))))
})
Now, with this one you will get a data frame with one row per value of z containing the two $par results:
ddply(A, .(z), function(x) {
  hdev <- function(par, mydata) {-sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))}
  res <- nlminb(c(9, 12), hdev, mydata = t(as.vector(x[1] + as.vector(x[2]))))
  return(res$par)
})
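For what it's worth, here is a base R sketch without plyr that does the same thing. It also shows why your by() and sparseby() attempts returned identical values for every z: the original hdev referred to the global x and y rather than to the subset, so the subset's data has to be passed in explicitly (hdev2 is just a renamed copy of hdev that takes the data as an argument):
hdev2 <- function(par, mydata) -sum(dnbinom(mydata, mu = par[1], size = par[2], log = TRUE))
res <- lapply(split(A, A$z), function(d) nlminb(c(9, 12), hdev2, mydata = d$x + d$y)$par)
do.call(rbind, res)   # one row of parameter estimates per value of z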
