I have a periodic term
v(x) = sum over K of [exp(iKx) V(K) ]
where K = 2*pi*n/a, a is the periodicity of the term, and n = 0, 1, 2, 3, ....
Now I want to find the Fourier coefficient V(K) corresponding to a particular K. Suppose I have a vector for v(x) with 10000 points at
x = 0, 0.01a, 0.02a, ..., a, 1.01a, ..., 2a, ..., 100a,
such that the size of my lattice is 100a. An FFT of this vector gives 10000 Fourier coefficients. The K values corresponding to these Fourier coefficients are 2*pi*n/(10000*0.01a) with n = 0, 1, 2, 3, ..., 9999.
But my K had the form 2*pi*n/a due to the periodicity of the lattice. What am I missing?
Your function is probably real-valued rather than complex, so you will need negative frequencies in the complex Fourier series. For the FFT this does not matter, since the negative frequencies are aliased onto the higher positive frequencies, but in the continuous-function expression this could give strange results.
That means the range of n is from -N/2 to N/2-1, where N is the number of samples.
Note that the points you have given are 10001 in number if you start at 0 with steps of 0.01a and end at 100a. So for N = 10000 points the last point should be 100a - 0.01a = 99.99a.
Your sampling frequency is the reciprocal of the sampling step, Fs = 1/(0.01a). The frequencies of the FFT are then 2*pi*n/N*Fs = 2*pi*n/(10000*0.01a) = 2*pi*n/(100a); every 100th of them corresponds to one of your K.
This is not surprising, since the sampling spans 100 periods of the function, and the longer overall period results in a correspondingly lower fundamental frequency. If the signal v(x) is truly periodic, all amplitudes except the ones with n divisible by 100 will be zero. If the signal is not exactly periodic due to noise and measurement errors, the peaks will leak into neighboring frequencies, and for a correct result for the original task you will have to integrate the amplitudes over the peaks.
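Here is a minimal numpy sketch of this effect; the step size and lattice length are the ones from the question, and the two cosine coefficients are made up for illustration:

import numpy as np

a = 1.0                    # lattice period (arbitrary units)
dx = 0.01 * a              # sampling step
N = 10000                  # number of samples, spanning 100 periods
x = np.arange(N) * dx

# periodic v(x) with made-up coefficients at K = 2*pi/a and K = 4*pi/a
v = 0.5 * np.cos(2 * np.pi * x / a) + 0.25 * np.cos(4 * np.pi * x / a)

V = np.fft.fft(v) / N      # normalize so values match the series coefficients

# only bins with n divisible by 100, i.e. K a multiple of 2*pi/a, are nonzero
print(np.flatnonzero(np.abs(V) > 1e-6))   # [ 100  200 9800 9900]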
I am wondering if anyone can provide links or ideas about how to calculate a stochastic NPV after a Monte Carlo simulation, and how to calculate the probability of NPV > 0. We first calculated a deterministic NPV with all the assumptions; then I took some important parameters to which I can assign uncertainty and gave them uniform distributions (runif). But the probability of a positive NPV comes out as exactly 0 or 1, never anything in between. Is there something wrong with how I am calculating the probability of a positive NPV, or with how I am calculating npv_vec[i]?
...
a_vec <- runif(ndraw, 10, 20)
b_vec <- runif(ndraw, 20, 30)
npv_vec <- rep(NA, ndraw)
profit_vec <- rep(NA, ndraw)
for (i in 1:ndraw) {
  npv_vec[i] <- NPV_fun(a_vec[i], b_vec[i])
  profit_vec[i] <- ifelse(npv_vec[i] > 0, 1, 0)
}
# calculate the probability of positive npv
pb_profit <- mean(profit_vec)
pb_profit
...
On a single flip of a coin, it comes up either heads or tails. This does not mean that the probability of heads is either 0 or 1. To estimate that probability you have to perform multiple flips and determine the proportion of flips that come up heads.
Similarly, on any single draw the event NPV > 0 either happens or it doesn't, so each entry of profit_vec is 0 or 1 with nothing in between. As with coin flips, you estimate the probability from multiple trials, as the proportion of trials with NPV > 0, which is exactly what mean(profit_vec) computes. If that mean itself is exactly 0 or 1, then all ndraw draws fell on the same side of zero.
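For comparison, a minimal sketch of the same estimator in Python, with a made-up toy function standing in for NPV_fun:

import random

def npv(a, b):
    # toy stand-in for the real NPV function, chosen so both signs occur
    return a + 0.5 * b - 30.0

ndraw = 10000
hits = 0
for _ in range(ndraw):
    a = random.uniform(10, 20)   # same input ranges as the R code above
    b = random.uniform(20, 30)
    if npv(a, b) > 0:
        hits += 1

print(hits / ndraw)   # estimated P(NPV > 0), strictly between 0 and 1 here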
I would like to know if there is any method to sample from the geometric distribution in constant time without using log, which can be hard to approximate. Thanks.
Without relying on logarithms, there is no algorithm that samples from a geometric(p) distribution in constant expected time. Rather, on a realistic computing model, such an algorithm's expected running time must grow at least as fast as 1 + log(1/p)/w, where w is the word size of the computer in bits (Bringmann and Friedrich 2013). The following algorithm, which is equivalent to one in the Bringmann paper, generates a geometric(px/py) random number without relying on logarithms, and when px/py is very small it is considerably faster than the trivial algorithm of running Bernoulli trials until the first success:
1. Set pn to px, k to 0, and d to 0.
2. While pn*2 <= py, add 1 to k and multiply pn by 2.
3. With probability (1 - px/py)^(2^k), add 1 to d and repeat this step.
4. Generate a uniform random integer in [0, 2^k), call it m; then, with probability (1 - px/py)^m, return d*2^k + m. Otherwise, repeat this step.
(The actual algorithm described in the Bringmann paper is in fact much more involved than this; see my note "On a Geometric Sampler".)
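A sketch of those steps in Python follows. For clarity it evaluates the acceptance probabilities with floating-point powers (and one division to form px/py); the exact algorithm in the paper simulates these Bernoulli events without floating-point arithmetic:

import random

def geometric(px, py):
    # number of failures before the first success in Bernoulli(px/py) trials
    pn, k, d = px, 0, 0
    while pn * 2 <= py:               # step 2: largest k with px*2^k <= py
        k += 1
        pn *= 2
    q = 1.0 - px / py                 # per-trial failure probability
    q_block = q ** (1 << k)           # chance a whole block of 2^k trials fails
    while random.random() < q_block:  # step 3: count fully failed blocks
        d += 1
    while True:                       # step 4: offset within the final block
        m = random.randrange(1 << k)
        if random.random() < q ** m:
            return d * (1 << k) + m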
REFERENCES:
Bringmann, K., and Friedrich, T. (2013). "Exact and efficient generation of geometric random variates and random graphs." In International Colloquium on Automata, Languages, and Programming, pp. 267-278.
I made a program for finding a triple integral of an f(x,y,z) function over a general region, but it's not working for some reason.
Here's an excerpt of the program, for when the order of integration is dz dy dx:
(B-A)/N→D
0→V
Dsum(seq(fnInt(Y₅,Y,Y₉,Y₀),X,A+.5D,B,D))→V
For(K,1,P)
A+(B-A)rand→X
Y₉+(Y₀-Y₉)rand→Y
Y₇+(Y₈-Y₇)rand→Z
Y₆→ʟW(K)
End
Vmean(ʟW)→V
The variables used are explained below:
Y₆: Equation of f(x,y,z)
Y₇,Y₈: Lower and upper bounds of the innermost integral (dz)
Y₉,Y₀: Lower and upper bounds of the middle integral (dy)
A,B: Lower and upper bounds of the outermost integral (dx)
Y₅: Y₈-Y₇
N: Number of Δx intervals
D: Size of Δx interval
P: Number of random points used to estimate the average value of f(x,y,z)
ʟW: List of various values of f(x,y,z)
V: Volume of the region of integration, then of the entire triple integral
So here's how I'm approaching it:
I first find the volume of just the region of integration using Dsum(seq(fnInt(Y₅,Y,Y₉,Y₀),X,A+.5D,B,D)). Then I pick a bunch of random (x,y,z) points in that region and plug them into f(x,y,z) to build a long list of values w = f(x,y,z). The average of those w-values should give a pretty good estimate of the average "height" of the 4D solid that is the triple integral, so multiplying the region-of-integration "base" by the average w-value "height" (Vmean(ʟW)) should give a good estimate of the hypervolume of the triple integral.
It should naturally follow that as the number of (x,y,z) points tested increases, the estimate should more or less converge to the true value of the triple integral.
For some reason, it doesn't. For some integrals it works fantastically; for others it misses by a long shot. A good example is ∫[0, 2] ∫[0, 2-x] ∫[0, 2-x-y] 2x dz dy dx. The correct answer is 4/3 = 1.333..., but the program converges to a completely different number: about 2.67.
Why is it doing this? Why is the triple integral converging to a wrong number?
EDIT: My guess, assuming I didn't make any mistakes (for which there are no promises), is that the RNG algorithm used by the calculator can only generate numbers slightly greater than 0 and is throwing the program off. But I have no way to confirm this, nor to account for it, since "slightly greater than 0" isn't quantified.
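For readers without the calculator, here is a minimal Python re-creation of the sampling scheme the program describes, applied to the example integral above (the helper names and the point count are my own):

import random

# Example from the question: integrate f(x,y,z) = 2x over the region
# 0 <= x <= 2, 0 <= y <= 2-x, 0 <= z <= 2-x-y; the true value is 4/3.
def f(x, y, z):
    return 2 * x

A, B = 0.0, 2.0
y_lo, y_hi = (lambda x: 0.0), (lambda x: 2.0 - x)
z_lo, z_hi = (lambda x, y: 0.0), (lambda x, y: 2.0 - x - y)

V = 4.0 / 3.0     # volume of the region, as the Dsum/fnInt line computes

P = 200000
w_sum = 0.0
for _ in range(P):
    x = A + (B - A) * random.random()
    y = y_lo(x) + (y_hi(x) - y_lo(x)) * random.random()
    z = z_lo(x, y) + (z_hi(x, y) - z_lo(x, y)) * random.random()
    w_sum += f(x, y, z)

print(V * w_sum / P)   # settles near 2.67, matching the behavior described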
I have an array of n integer values x[] that range from low to high. There are therefore m := high - low + 1 possible values.
I am now looking for an algorithm that calculates how uniformly the input values are distributed over the interval [low, high].
It should output, e.g., 1 if the values are as uniform as possible and 0 if all x[i] are the same.
The difficulty is that the algorithm has to work both when n is much lower than m and when it is much higher than m.
Thank you
You can compute the Kolmogorov-Smirnov statistic, which is the maximum absolute deviation of the empirical cumulative distribution function from the test CDF, which in this case is a straight line (since the test distribution is uniform).
Or you can compute the discrepancy of the data.
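A sketch of that statistic in Python with numpy (the function name and signature are my own). Note the statistic is 0 for a perfectly uniform sample, the opposite orientation of the 0/1 convention asked for, so you may want to report 1 minus it:

import numpy as np

def ks_uniformity(x, low, high):
    # KS statistic of integer samples x against the discrete uniform on [low, high]
    x = np.sort(np.asarray(x))
    n = len(x)
    m = high - low + 1
    values = np.arange(low, high + 1)
    ecdf = np.searchsorted(x, values, side="right") / n   # empirical CDF at each value
    ucdf = np.arange(1, m + 1) / m                        # uniform CDF, a straight line
    return float(np.max(np.abs(ecdf - ucdf)))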
I found a solution that works for my case:
First I calculate a cumulative histogram of the values
(a discrete function that maps every possible value v in [min, max] to |{x[i] : x[i] <= v}|).
Then I compute the distance of the histogram to the diagonal line through it (from (0,0) to (m,n)) in a squared sense: I sum up the squared distances of every point of the histogram to that line.
This algorithm does not provide a normalized measure, but it works well with both very few and very many samples. I only need to compare two or more sets of values by their uniformity, and this does the job for me.
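A sketch of that measure in Python with numpy; since the post does not say whether the distances are vertical or perpendicular, this version uses the vertical deviation from the diagonal (smaller means more uniform):

import numpy as np

def diagonal_sq_distance(x, low, high):
    n = len(x)
    m = high - low + 1
    counts = np.bincount(np.asarray(x) - low, minlength=m)
    cum_hist = np.cumsum(counts)              # |{x[i] : x[i] <= v}| for each v
    diagonal = n * np.arange(1, m + 1) / m    # the line from (0,0) to (m,n)
    return float(np.sum((cum_hist - diagonal) ** 2))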
I am looking for a math equation or algorithm that can generate uniform random numbers in ascending order in the range [0,1] without the help of the division operator. I am keen on skipping the division operation because I am implementing it in hardware. Thank you.
Generating the numbers in ascending (or descending) order means generating them sequentially but with the right distribution. That, in turn, means we need to know the distribution of the minimum of a set of size N, and then at each stage we need to use conditioning to determine the next value based on what we've already seen. Mathematically these are both straightforward except for the issue of avoiding division.
You can generate the minimum of N uniform(0,1)'s from a single uniform(0,1) random number U using the algorithm min = 1 - U**(1/N), where ** denotes exponentiation. In other words, the complement of the Nth root of a uniform has the same distribution as the minimum of N uniforms over the range [0,1], which can then be scaled to any other interval length you like.
The conditioning aspect basically says that the k values already generated will have eaten up some portion of the original interval, and that what we now want is the minimum of N-k values, scaled to the remaining range.
Combining the two pieces yields the following logic. Generate the smallest of the N uniforms, scale it by the remaining interval length (1 the first time), and make that result the last value we have generated. Then generate the smallest of N-1 uniforms, scale it by the remaining interval length, and add it to the last one to give you your next value. Lather, rinse, repeat, until you have done them all. The following Ruby implementation gives distributionally correct results, assuming you have read in or specified N prior to this:
last_u = 0.0
N.downto(1) do |i|
p last_u += (1.0 - last_u) * (1.0 - (rand ** (1.0/i)))
end
but we have that pesky ith root, which uses division. However, if we know N ahead of time, we can pre-calculate the reciprocals of the integers from 1 to N offline and store them in a table:
# reciprocals pre-computed once, offline
inverse = Array.new(N + 1) { |i| i.zero? ? 0.0 : 1.0 / i }
last_u = 0.0
N.downto(1) do |i|
p last_u += (1.0 - last_u) * (1.0 - (rand ** inverse[i]))
end
I don't know of any way to get the correct distributional behavior sequentially without using exponentiation. If that's a show-stopper, you're going to have to give up either the sequential nature of the process or the uniformity requirement.
You can try so-called "stratified sampling", which means you divide the range into bins and then sample randomly from each bin. A sample thus generated is more uniform (less clumping) than a sample generated from the entire interval. For this reason, stratified sampling reduces the variance of Monte Carlo estimates (I don't suppose that's important to you, but that's why the method was invented, as a variance reduction method).
It is an interesting problem to generate numbers in order, but my guess is that to get a uniform distribution over the entire interval you will have to apply formulas that require more computation. If you want to minimize computation time, I suspect you cannot do better than generating a sample and then sorting it.
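A sketch of one-point-per-bin stratified sampling in Python. With n bins the output is ascending by construction, and the single division by n could be replaced by a precomputed reciprocal in hardware:

import random

def stratified_ascending(n):
    # one uniform draw per bin [i/n, (i+1)/n); the list is ascending by construction
    width = 1.0 / n   # in hardware this reciprocal would be a precomputed constant
    return [(i + random.random()) * width for i in range(n)]

print(stratified_ascending(5))   # e.g. five ascending values in [0, 1)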