Calculating NPV using Monte Carlo simulation - for-loop

I am wondering if anyone can provide links or ideas on how to calculate a stochastic NPV after a Monte Carlo simulation, and how to calculate the probability of NPV > 0. We first calculated the deterministic NPV with all the assumptions; then I took some important parameters to which I could assign uncertainty and gave them uniform distributions (runif). But the probability of positive NPV always comes out as exactly 0 or 1, never anywhere in between. Is there something wrong with how I am calculating the probability of positive NPV, or with how I am calculating npv_vec[i]?
...
ndraw <- 100                       # number of Monte Carlo draws
a_vec <- runif(ndraw, 10, 20)      # uncertain parameter a
b_vec <- runif(ndraw, 20, 30)      # uncertain parameter b
npv_vec <- rep(NA, ndraw)
profit_vec <- rep(NA, ndraw)
for (i in 1:ndraw) {
  npv_vec[i] <- NPV_fun(a_vec[i], b_vec[i])
  profit_vec[i] <- ifelse(npv_vec[i] > 0, 1, 0)  # 1 if this draw has positive NPV
}
# calculate the probability of positive npv
pb_profit <- mean(profit_vec)
pb_profit
...

On a single flip of a coin, it comes up either heads or tails. This does not mean that the probability of heads is either 0 or 1. To estimate that probability you have to perform multiple trials of the coin flip, and determine the proportion of flips which are heads.
Similarly, on a single draw of the parameters, NPV > 0 either happens or it doesn't; the outcome is 0 or 1 with nothing in between. As with coin flips, you determine the probability from multiple trials, calculating the proportion of trials which had NPV > 0.
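As a quick illustrative sketch of that proportion calculation (in Python rather than the question's R, with an assumed fair coin):

import random

n_trials = 10000
heads = sum(1 for _ in range(n_trials) if random.random() < 0.5)  # one flip per trial
print(heads / n_trials)  # estimated probability of heads, close to 0.5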

Related

Sampling from Geometric distribution in constant time

I would like to know if there is any method to sample from the geometric distribution in constant time without using log, which can be hard to approximate. Thanks.
Without relying on logarithms, there is no algorithm to sample from a geometric(p) distribution in constant expected time. Rather, on a realistic computing model, such an algorithm's expected running time must grow at least as fast as 1 + log(1/p)/w, where w is the word size of the computer in bits (Bringmann and Friedrich 2013). The following algorithm, which is equivalent to one in the Bringmann paper, generates a geometric(px/py) random number without relying on logarithms, and when px/py is very small, the algorithm is considerably faster than the trivial algorithm of generating trials until a success:
Set pn to px, k to 0, and d to 0.
While pn*2 <= py, add 1 to k and multiply pn by 2.
With probability (1 - px/py)^(2^k), add 1 to d and repeat this step.
Generate a uniform random integer in [0, 2^k), call it m; then with probability (1 - px/py)^m, return d*2^k + m. Otherwise, repeat this step.
(The actual algorithm described in the Bringmann paper is in fact much more involved than this; see my note "On a Geometric Sampler".)
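Here is a minimal Python sketch of the four steps above; for brevity it uses floating-point powers where the paper uses exact Bernoulli coins, so treat it as illustrative rather than exact:

import random

def sample_geometric(px, py, rng=random):
    # Geometric(px/py) on {0, 1, 2, ...}, following the steps above.
    q = 1.0 - px / py            # failure probability
    pn, k = px, 0
    while pn * 2 <= py:          # find the largest k with px * 2^k <= py
        k += 1
        pn *= 2
    block = 1 << k               # block size 2^k
    d = 0
    while rng.random() < q ** block:   # count whole blocks of failures
        d += 1
    while True:                        # rejection-sample the offset within a block
        m = rng.randrange(block)
        if rng.random() < q ** m:
            return d * block + m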
REFERENCES:
Bringmann, K., and Friedrich, T., 2013. "Exact and efficient generation of geometric random variates and random graphs." In: International Colloquium on Automata, Languages, and Programming (ICALP 2013), pp. 267-278.

discrete fourier transform in Matlab - theoretical confusion

I have a periodic term
v(x) = sum over K of [exp(iKx) V(K) ]
where K = 2*pi*n/a, a is the periodicity of the term, and n = 0, 1, 2, 3, ...
Now I want to find the Fourier coefficient V(K) corresponding to a particular K. Suppose I have a vector for v(x) having 10000 points for
x = 0, 0.01a, 0.02a, ..., a, 1.01a, ..., 2a, ..., 100a
such that the size of my lattice is 100a. FFT on this vector gives 10000 Fourier coefficients. The K values corresponding to these Fourier coefficients are 2*pi*n/(10000*0.01a) with n = 0, 1, 2, 3, ..., 9999.
But my K had the form 2*pi*n/a due to the periodicity of the lattice. What am I missing ?
Your function is probably real-valued rather than complex, so you will need negative frequencies in the complex Fourier series expression. During the FFT this does not matter, since the negative frequencies are aliased onto the higher positive frequencies, but in the expression as a continuous function this could give strange results.
That means the range of n is from -N/2 to N/2-1, where N is the number of samples.
Note that the points you have given number 10001 if you start at 0 with steps of 0.01a and end at 100a. So for N = 10000 points the last point should be 100a - 0.01a = 99.99a.
Your sampling frequency is the reciprocal of the sampling step, Fs = 1/(0.01a). The frequencies of the FFT are then 2*pi*n*Fs/N = 2*pi*n/(10000*0.01a) = 2*pi*n/(100a); every 100th of them corresponds to one of your K.
This is not surprising, since the sampling spans 100 periods of the function; the longer window results in a much lower fundamental frequency. If the signal v(x) is truly periodic, all amplitudes except the ones for n divisible by 100 will be zero. If the signal is not exactly periodic due to noise and measurement errors, the peaks will leak into neighboring frequencies. For a correct result for the original task you will have to integrate the amplitudes over the peaks.
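The following NumPy sketch (Python rather than Matlab, with a made-up two-component signal of period a) illustrates this: only coefficients with n divisible by 100 are nonzero.

import numpy as np

a = 1.0                        # period of v(x)
N = 10000                      # number of samples
dx = 0.01 * a                  # sampling step; the window spans 100 periods
x = np.arange(N) * dx          # x = 0, 0.01a, ..., 99.99a

# example periodic signal with components at K = 2*pi/a and K = 4*pi/a
v = 2.0 * np.cos(2 * np.pi * x / a) + 0.5 * np.sin(4 * np.pi * x / a)

V = np.fft.fft(v) / N          # normalized Fourier coefficients
peaks = np.nonzero(np.abs(V) > 1e-8)[0]
print(peaks)                   # [ 100  200 9800 9900]; 9900 aliases n = -100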

generating sorted random numbers without exponentiation involved?

I am looking for a math equation or algorithm which can generate uniformly distributed random numbers in ascending order over the range [0,1] without using the division operator. I am keen to skip the division operation because I am implementing this in hardware. Thank you.
Generating the numbers in ascending (or descending) order means generating them sequentially but with the right distribution. That, in turn, means we need to know the distribution of the minimum of a set of size N, and then at each stage we need to use conditioning to determine the next value based on what we've already seen. Mathematically these are both straightforward except for the issue of avoiding division.
You can generate the minimum of N uniform(0,1)'s from a single uniform(0,1) random number U using the algorithm min = 1 - U**(1/N), where ** denotes exponentiation. In other words, the complement of the Nth root of a uniform has the same distribution as the minimum of N uniforms over the range [0,1], which can then be scaled to any other interval length you like.
The conditioning aspect basically says that the k values already generated will have eaten up some portion of the original interval, and that what we now want is the minimum of N-k values, scaled to the remaining range.
Combining the two pieces yields the following logic. Generate the smallest of the N uniforms, scale it by the remaining interval length (1 the first time), and make that result the last value we have generated. Then generate the smallest of N-1 uniforms, scale it by the remaining interval length, and add it to the last one to give you your next value. Lather, rinse, repeat, until you have done them all. The following Ruby implementation gives distributionally correct results, assuming you have read in or specified N prior to this:
last_u = 0.0
N.downto(1) do |i|
  # smallest of i remaining uniforms, rescaled to the interval above last_u
  p last_u += (1.0 - last_u) * (1.0 - (rand ** (1.0 / i)))
end
but we have that pesky ith root, which uses division in the exponent. However, if we know N ahead of time, we can pre-calculate the inverses of the integers from 1 to N offline and table them:
last_u = 0.0
N.downto(1) do |i|
  # inverse[i] holds 1.0 / i, precomputed offline
  p last_u += (1.0 - last_u) * (1.0 - (rand ** inverse[i]))
end
I don't know of any way to get the correct distributional behavior sequentially without using exponentiation. If that's a show-stopper, you're going to have to give up either the sequential nature of the process or the uniformity requirement.
You can try so-called "stratified sampling", which means you divide the range into bins and then sample randomly within each bin. A sample thus generated is more uniform (less clumping) than a sample generated over the entire interval. For this reason, stratified sampling reduces the variance of Monte Carlo estimates (I don't suppose that's important to you, but that's why the method was invented, as a variance reduction method).
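A minimal Python sketch of that idea (hypothetical helper name; the output is sorted by construction, though it is not distributed like the order statistics of independent uniforms):

import random

def stratified_sorted(n):
    # One draw per bin [i/n, (i+1)/n); the list is ascending by construction.
    w = 1.0 / n  # the only division, and it can be precomputed and tabled
    return [(i + random.random()) * w for i in range(n)]

print(stratified_sorted(5))  # e.g. [0.13, 0.25, 0.52, 0.74, 0.91]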
It is an interesting problem to generate numbers in order, but my guess is that to get a uniform distribution over the entire interval, you will have to apply some formulas which require more computation. If you want to minimize computation time, I suspect you cannot do better than generating a sample and then sorting it.

Algorithm for categorizing values

What would be the best algorithm to solve this problem? I spent a couple of hours on it but couldn't sort it out.
A guy purchased a necklace and planned to break it into two pieces in such a way that the average brightness of each piece is greater than or equal to that of the original necklace.
The criteria for dividing the necklace are:
1. The difference in the number of pearls between the two pearl sets should not be greater than 10% of the number of pearls in the original necklace, or 3, whichever is higher.
2. The difference between the numbers of pearls in the two necklaces should be minimal.
3. If the average brightness of either necklace is less than the average brightness of the original set, return 0 as output.
4. The two necklaces should each have average brightness greater than the original one, and the difference between the average brightnesses of the two pieces should be minimal.
5. The average brightness of each piece should be greater than or equal to that of the original piece.
This problem is rather hard to do efficiently (it is NP-hard in general).
Say you had a set that averaged to X. That is, X = (x1 + x2 + ... + xn) / n.
Suppose you break it up into sets that average to S and T with s and t items in each set, respectively.
You can mathematically prove that if one of the averages, S or T, is greater than X, the other must be less than X: X = (s*S + t*T)/(s + t) is a weighted average of S and T, so it always lies between them.
Hence, the two pieces must have exactly the same average brightness, X, because that's the only way your conditions can be satisfied.
Knowing this, you end up with a close relative of the subset sum problem, which is NP-complete: if you subtract the average of the full set from each brightness value, you want a nonempty proper subset that sums to exactly zero. (Do the reverse to see how you can solve the subset sum problem from your problem.)
Hence, there's no known fast way of doing this in general -- only approximations or exponential running times. However, subset sum admits pseudo-polynomial dynamic programming, so there are better running times if your weights (in your case, brightness levels) are bounded integers.
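As a small illustration of that reduction, here is a brute-force Python sketch (hypothetical function name, exponential time; it checks only the equal-average condition and ignores the size criteria):

from itertools import combinations

def equal_average_split(brightness):
    # Find a proper subset whose average equals the overall average.
    n, total = len(brightness), sum(brightness)
    for s in range(1, n):
        for subset in combinations(range(n), s):
            # avoid division: sum/s == total/n  <=>  sum*n == total*s
            if sum(brightness[i] for i in subset) * n == total * s:
                chosen = set(subset)
                return sorted(chosen), [i for i in range(n) if i not in chosen]
    return None  # criterion 3: no valid split exists

print(equal_average_split([4, 6, 5, 5]))  # ([2], [0, 1, 3])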

How do I determine the bias of an algorithm?

Let's say I have an algorithm that is supposed to represent a coin flip. How do I determine the bias of this coin? Specifically, I have written the algorithm in this JSFiddle.
The fiddle runs a series of 20 tests. Each test flips the coin 100 times and tallies the results. At the end of the series it reports the ratio of heads to tails over the total number of flips across all tests. This ratio seems to approach 1 (from both sides), but I have not done any rigorous testing of it.
Note, this is not homework. This is purely a personal interest.
You can't come up with a way to guarantee detecting a bias, but you can detect one with a given degree of confidence (say 95%). What you do is flip n times and count how many times you get heads; call this count h.
Then if h / n < 0.5 - 1.96 * sqrt(0.25 / n) then the coin is biased towards tails (with a 95% probability) and if h / n > 0.5 + 1.96 * sqrt(0.25 / n) then the coin is biased towards heads.
This decision is based on the normal approximation to the binomial distribution; you can read more about it here: http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Normal_approximation_interval
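A minimal Python sketch of that decision rule (hypothetical function name):

import math

def coin_bias_test(h, n, z=1.96):
    # Two-sided test at ~95% confidence via the normal approximation.
    half_width = z * math.sqrt(0.25 / n)
    p_hat = h / n
    if p_hat < 0.5 - half_width:
        return "biased towards tails"
    if p_hat > 0.5 + half_width:
        return "biased towards heads"
    return "no evidence of bias at this sample size"

print(coin_bias_test(1080, 2000))  # 0.54 > 0.5 + 0.0219 -> biased towards heads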
George Marsaglia's Diehard tests
If you consider each head/tail flip as generating a 0 or 1 bit, you can use those bits to build random numbers and test their randomness with the Diehard battery.
