In Apache JMeter, what is the difference between uniform random timer and Gaussian Random Timer?
The Uniform Random Timer pauses the thread by a factor of:
The next pseudorandom uniformly-distributed value in range between 0.0 (inclusive) and 1.0 (exclusive)
Multiplied by “Random Delay Maximum”
Plus “Constant Delay Offset”
So the formula is: 0.X * Random Delay Maximum + Constant Delay Offset, where X can be any value from 0 to 9 inclusive.
Gaussian Random Timer basically uses the same formula, but X is calculated with the Marsaglia polar method instead of a uniform pseudorandom draw.
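To make the difference concrete, here is a minimal Python sketch of the two computations (illustrative only, not JMeter's actual source; the function names are mine, and Python's random.gauss stands in for Java's Random.nextGaussian, which uses the Marsaglia polar method):

import random

def uniform_random_timer_delay(random_delay_maximum, constant_delay_offset):
    # X is drawn uniformly from [0.0, 1.0)
    return constant_delay_offset + random.random() * random_delay_maximum

def gaussian_random_timer_delay(deviation, constant_delay_offset):
    # Same shape of formula, but X comes from a standard normal distribution;
    # JMeter's "Deviation" field plays the role of the multiplier here
    return constant_delay_offset + random.gauss(0.0, 1.0) * deviation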
See A Comprehensive Guide to Using JMeter Timers for a more detailed explanation of how the above and other JMeter Timers work.
I am wondering if anyone can provide me links or ideas about how I can calculate a stochastic NPV after a Monte Carlo simulation, and how I can calculate the probability of NPV > 0. We first calculated a deterministic NPV with all the assumptions, and then I took some important parameters to which I can assign uncertainty and gave them a uniform distribution (runif). But the probability of a positive NPV comes out as either 0 or 1, nowhere in between. Is there something wrong with how I am calculating the probability of positive NPV, or with how I am calculating npv_vec[i]?
...
ndraw <- 100                    # number of Monte Carlo draws
a_vec <- runif(ndraw, 10, 20)   # uncertain parameter a
b_vec <- runif(ndraw, 20, 30)   # uncertain parameter b
npv_vec <- rep(NA, ndraw)
profit_vec <- rep(NA, ndraw)
for (i in 1:ndraw) {
  npv_vec[i] <- NPV_fun(a_vec[i], b_vec[i])
  profit_vec[i] <- ifelse(npv_vec[i] > 0, 1, 0)
}
# calculate the probability of positive npv
pb_profit <- mean(profit_vec)
pb_profit
...
On a single flip of a coin, it comes up either heads or tails. This does not mean that the probability of heads is either 0 or 1. To estimate that probability you have to perform multiple trials of the coin flip, and determine the proportion of flips which are heads.
Similarly, on a single trial the indicator NPV>0 is either 0 or 1, with no chance of anything in between. As with coin flips, you determine the probability from multiple trials by calculating the proportion of trials which had NPV>0.
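The same idea in a tiny Python sketch (the 0.5 coin bias is just for illustration):

import random

# Each flip is 0 or 1; the probability emerges as the proportion over many trials
flips = [random.random() < 0.5 for _ in range(10000)]
print(sum(flips) / len(flips))  # close to 0.5, even though every single flip is 0 or 1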
I have an array of n integer values x[] that range from low to high. There are therefore m := high - low + 1 possible values.
I am now searching for an algorithm that calculates how uniformly the input values are distributed over the interval [low, high].
It should output e.g. 1 if the values are as uniform as possible and 0 if all x[i] are the same.
The problem is that the algorithm has to work both when n is much lower than m and when it is much higher than m.
Thank you
You can compute the Kolmogorov-Smirnov statistic, which is the maximum absolute deviation of the empirical cumulative mass function from the test cmf, which in this case is a straight line (since the test pmf is a uniform distribution).
Or you can compute the discrepancy of the data.
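A minimal NumPy sketch of the Kolmogorov-Smirnov idea for the discrete case described in the question (the function name is mine; x is assumed to hold integers in [low, high]):

import numpy as np

def ks_uniformity(x, low, high):
    # Maximum absolute deviation of the empirical CMF from the uniform test CMF
    m = high - low + 1
    counts = np.bincount(np.asarray(x) - low, minlength=m)
    ecmf = np.cumsum(counts) / len(x)   # empirical cumulative mass function
    ucmf = np.arange(1, m + 1) / m      # uniform test CMF (a straight line)
    return np.max(np.abs(ecmf - ucmf))

Note this statistic is 0 for a perfectly uniform sample and grows as the sample concentrates, so it runs opposite to the 0-to-1 scale asked for; subtracting it from 1 is one crude way to flip it.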
I found a solution that works for my case:
First I calculate a cumulative histogram of the values
(a discrete function that maps every possible value v of [low, high] to |{x[i] : x[i] <= v}|).
Then I compute the distance to the diagonal line through the histogram (from (0,0) to (m,n)) in a squared way: I sum up the squared distances of every point in the histogram to that line.
This algorithm does not produce a normalized score, but it works well with both very few and very many samples. I only need the algorithm to compare two or more sets of values by their uniformity, and it does this for me.
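Here is a minimal NumPy sketch of that solution as I read it (the function name is mine; it uses vertical distances to the diagonal rather than perpendicular ones, which only rescales the result by a constant factor):

import numpy as np

def squared_diagonal_distance(x, low, high):
    x = np.asarray(x)
    n, m = len(x), high - low + 1
    counts = np.bincount(x - low, minlength=m)
    cum = np.cumsum(counts)                 # cumulative histogram: v -> |{x[i] : x[i] <= v}|
    diagonal = n * np.arange(1, m + 1) / m  # the line from (0, 0) to (m, n)
    return np.sum((cum - diagonal) ** 2)    # 0 when the counts are perfectly uniform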
I have a periodic term
v(x) = Σ_K exp(iKx) V(K)
where K = 2*pi*n/a, a is the periodicity of the term, and n = 0, 1, 2, 3, ...
Now I want to find the Fourier coefficient V(K) corresponding to a particular K. Suppose I have a vector for v(x) having 10000 points for
x = 0, 0.01a, 0.02a, ..., a, 1.01a, ..., 2a, ..., 100a
such that the size of my lattice is 100a. An FFT on this vector gives 10000 Fourier coefficients. The K values corresponding to these Fourier coefficients are 2*pi*n/(10000*0.01a) with n = 0, 1, 2, 3, ..., 9999.
But my K had the form 2*pi*n/a due to the periodicity of the lattice. What am I missing ?
Your function is probably not complex-valued, so you will need negative frequencies in the complex Fourier series expression. For the FFT this does not matter, since the negative frequencies are aliased to the higher positive frequencies, but in the expression as a continuous function this could give strange results.
That means that the range of n is from -N/2 to N/2-1 if N is the size of the sampling.
Note that the points you have given are 10001 in number if you start at 0a with 0.01a steps and end at 100a. So the last point for N=10000 points should be 100a-0.01a=99.99a.
Your sampling frequency is the reciprocal of the sampling step, Fs=1/(0.01a). The frequencies of the FFT are then 2*pi*n/N*Fs=2*pi*n/(10000*0.01a)=2*pi*n/(100*a), every 100th of them corresponds to one of your K.
This is not surprising: the sampling spans 100 periods of the function, and the longer overall period results in a much lower fundamental frequency. If the signal v(x) is truly periodic, all amplitudes except the ones for n divisible by 100 will be zero. If the signal is not exactly periodic due to noise and measurement errors, the peaks will leak into neighboring frequencies. For a correct result for the original task you will have to integrate the amplitudes over the peaks.
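A small NumPy demonstration of this point (the signal v is a made-up example with period a; sampling it over 100 periods leaves nonzero coefficients only at n divisible by 100):

import numpy as np

a = 1.0
N = 10000
x = np.arange(N) * 0.01 * a  # x = 0, 0.01a, ..., 99.99a (100 periods)
v = 2.0 * np.cos(2 * np.pi * x / a) + 0.5 * np.sin(4 * np.pi * x / a)
V = np.fft.fft(v) / N        # Fourier coefficients
print(np.nonzero(np.abs(V) > 1e-10)[0])  # [100 200 9800 9900], all multiples of 100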
I am looking for a math equation or algorithm which can generate uniform random numbers in ascending order in the range [0,1] without the help of the division operator. I am keen on skipping the division operation because I am implementing it in hardware. Thank you.
Generating the numbers in ascending (or descending) order means generating them sequentially but with the right distribution. That, in turn, means we need to know the distribution of the minimum of a set of size N, and then at each stage we need to use conditioning to determine the next value based on what we've already seen. Mathematically these are both straightforward except for the issue of avoiding division.
You can generate the minimum of N uniform(0,1)'s from a single uniform(0,1) random number U using the algorithm min = 1 - U**(1/N), where ** denotes exponentiation. In other words, the complement of the Nth root of a uniform has the same distribution as the minimum of N uniforms over the range [0,1], which can then be scaled to any other interval length you like.
The conditioning aspect basically says that the k values already generated will have eaten up some portion of the original interval, and that what we now want is the minimum of N-k values, scaled to the remaining range.
Combining the two pieces yields the following logic. Generate the smallest of the N uniforms, scale it by the remaining interval length (1 the first time), and make that result the last value we have generated. Then generate the smallest of N-1 uniforms, scale it by the remaining interval length, and add it to the last one to give you your next value. Lather, rinse, repeat, until you have done them all. The following Ruby implementation gives distributionally correct results, assuming you have read in or specified N prior to this:
last_u = 0.0
N.downto(1) do |i|
  # smallest of the i remaining uniforms, scaled into the remaining interval
  p last_u += (1.0 - last_u) * (1.0 - (rand ** (1.0 / i)))
end
But we have that pesky ith root, which uses division. However, if we know N ahead of time, we can pre-calculate the inverses of the integers from 1 to N offline and store them in a table.
# inverse[i] holds 1.0 / i, pre-calculated offline
last_u = 0.0
N.downto(1) do |i|
  p last_u += (1.0 - last_u) * (1.0 - (rand ** inverse[i]))
end
I don't know of any way to get the correct distributional behavior sequentially without using exponentiation. If that's a show-stopper, you're going to have to give up on either the sequential nature of the process or the uniformity requirement.
You can try so-called "stratified sampling", which means you divide the range into bins and then sample randomly from each bin, as in the sketch below. A sample generated this way is more uniform (less clumping) than a sample generated from the entire interval. For this reason, stratified sampling reduces the variance of Monte Carlo estimates (I don't suppose that's important to you, but that's why the method was invented, as a variance reduction technique).
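For illustration, a minimal Python sketch of stratified sampling over [0, 1] (one draw per bin, so the output is ascending by construction; note its joint distribution is not that of sorted i.i.d. uniforms, and the single reciprocal is computed once up front):

import random

def stratified_uniform(n):
    w = 1.0 / n  # bin width; the only division, done once (or pre-tabulated)
    # one uniform draw inside each bin [i/n, (i+1)/n), ascending across bins
    return [(i + random.random()) * w for i in range(n)]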
It is an interesting problem to generate numbers in order, but my guess is that to get a uniform distribution over the entire interval, you will have to apply some formulas which require more computation. If you want to minimize computation time, I suspect you cannot do better than generating a sample and then sorting it.
I've implemented a simple autocorrelation routine against some audio samples at a sample rate of 44100 Hz with a block size of 2048.
The general formula I am following looks like this:
r[k] = (a ⋆ b)[k] = ∑_n a[n] • b[n + k]
and I've implemented it in a brute-force nested loop as follows:
for k = 0 to N-1 do
  for n = 0 to N-1 do
    if (n + k) < N then
      r[k] := r[k] + a[n] * a[n + k]
    else
      break;
  end for n;
end for k;
I look for the max magnitude in r and determine how many samples away it is and calculate the frequency.
To help temper the tuner's results, I am using a circular buffer and returning the median each time.
The brute force calculations are a bit slow - is there a well-known, faster way to do them?
Sometimes, the tuner just isn't quite as accurate as is needed. What type of heuristics can I apply here to help refine the results?
Sometimes the OCTAVE is incorrect - is there a way to home in on the correct octave a bit more accurately?
The efficient way to do autocorrelation is with an FFT:
FFT the time domain signal
convert complex FFT output to magnitude and zero phase (i.e. power spectrum)
take inverse FFT
This works because autocorrelation in the time domain is equivalent to power spectrum in the frequency domain.
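A minimal NumPy sketch of this FFT-based autocorrelation (the function name is mine; zero-padding to 2N gives linear rather than circular autocorrelation):

import numpy as np

def autocorr_fft(a):
    n = len(a)
    f = np.fft.rfft(a, 2 * n)   # FFT of the time-domain signal, zero-padded
    p = f * np.conj(f)          # power spectrum: magnitude squared, zero phase
    return np.fft.irfft(p)[:n]  # inverse FFT; r[k] = sum of a[n] * a[n + k]

This drops the cost from O(N^2) to O(N log N), which matters at a block size of 2048.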
Having said that, bare bones autocorrelation is not a great way to implement (accurate) pitch detection in general, so you might want to have a rethink about your whole approach.
One simple way to improve this "brute force" autocorrelation method is to limit the range of k and only search for lags (or pitch periods) near the previous average period, say within +-0.5 semitones at first. If you don't find a correlation, then search a slightly wider range, say within a major third, and then a wider range still, but within the expected frequency range of the instrument being tuned.
You can get higher frequency resolution by using a higher sample rate (say, upsampling the data before the autocorrelation if necessary, and with proper filtering).
You will get autocorrelation peaks for the pitch lag (period) and for multiples of that lag. You will have to eliminate those subharmonics somehow (perhaps by ruling them out as impossible for the instrument, or as an unlikely pitch jump from previous frequency estimates).
I don't fully understand the question, but I can point out one trick that you might be able to use. You say you look for the sample that is the max magnitude. If it is useful in the rest of your calculations, you can calculate that sample number to sub-sample precision.
Let's say you have a peak of 0.9 at sample 5 and neighboring samples of 0.1 and 0.8. The actual peak is probably somewhere between sample 5 and sample 6.
(0.1 * 4 + 0.9 * 5 + 0.8 * 6) / (0.1 + 0.9 + 0.8) = 5.39
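As a small Python sketch of that weighted-average interpolation (a hypothetical helper; it assumes the peak index i is not at either end of r):

def subsample_peak(r, i):
    # weighted average of the peak bin and its two neighbors
    y0, y1, y2 = r[i - 1], r[i], r[i + 1]
    return (y0 * (i - 1) + y1 * i + y2 * (i + 1)) / (y0 + y1 + y2)

With the example above, values 0.1, 0.9, 0.8 around i = 5 give 5.39.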