Calculate certainty of Monte Carlo simulation - probability

Let's say that we use the Monte Carlo method to estimate the area of an object, in the exact same way you'd use it to estimate the value of π.
Now, let's say we want to calculate the certainty of our simulation result. We've cast n samples, m of which landed inside the object, so the area of the object is approximately m/n of the total sampled area. We would like to make a statement such as:
"We are 99% certain that the area of the object is between a1 and a2."
How can we calculate a1 and a2 above (given n, m, total area, and the desired certainty)?
Here is a program which attempts to estimate this bound numerically. Here the samples are points in [0,1), and the object is the segment [0.25,0.75). It prints a1 and a2 for 50%, 90%, and 99%, for a range of sample counts:
import std.algorithm;
import std.random;
import std.range;
import std.stdio;

void main()
{
    foreach (numSamples; iota(0, 1000+1, 100).filter!(n => n > 0))
    {
        auto samples = new double[numSamples];
        enum objectStart = 0.25;
        enum objectEnd = 0.75;
        enum numTotalSamples = 10_000_000;
        auto numSizes = numTotalSamples / numSamples;
        auto sizes = new double[numSizes];
        foreach (ref size; sizes)
        {
            size_t numHits;
            foreach (i; 0 .. numSamples)
            {
                auto sample = uniform01!double;
                if (sample >= objectStart && sample < objectEnd)
                    numHits++;
            }
            size = 1.0 / numSamples * numHits;
        }
        sizes.sort;
        writef("%d samples:", numSamples);
        foreach (certainty; [50, 90, 99])
        {
            auto centerDist = numSizes * certainty / 100 / 2;
            auto startPos = numSizes / 2 - centerDist;
            auto endPos = numSizes / 2 + centerDist;
            writef("\t%.5f..%.5f", sizes[startPos], sizes[endPos]);
        }
        writeln;
    }
}
(Run it online.) It outputs:
// 50% 90% 99%
100 samples: 0.47000..0.53000 0.42000..0.58000 0.37000..0.63000
200 samples: 0.47500..0.52500 0.44500..0.56000 0.41000..0.59000
300 samples: 0.48000..0.52000 0.45333..0.54667 0.42667..0.57333
400 samples: 0.48250..0.51750 0.46000..0.54250 0.43500..0.56500
500 samples: 0.48600..0.51600 0.46400..0.53800 0.44200..0.55800
600 samples: 0.48667..0.51333 0.46667..0.53333 0.44833..0.55167
700 samples: 0.48714..0.51286 0.46857..0.53143 0.45000..0.54857
800 samples: 0.48750..0.51250 0.47125..0.53000 0.45375..0.54625
900 samples: 0.48889..0.51111 0.47222..0.52667 0.45778..0.54111
1000 samples: 0.48900..0.51000 0.47400..0.52500 0.45800..0.53900
Is it possible to precisely calculate these numbers instead?
(Context: I'd like to add something like "±X.Y GB with 99% certainty" to btdu)

OK, since the question is language-agnostic, here is an illustration of how to do error estimation with Monte Carlo.
Suppose you want to compute the integral
I = ∫₀¹ f(x) dx
where f(x) is a simple polynomial function
f(x) = xⁿ
For that you have to compute not only the mean value of the sample, but its standard deviation as well.
Then, knowing that the Monte Carlo error goes down as the inverse square root of the number of samples, computing the confidence interval is simple: it is mean ± z·s/√N, where s is the sample standard deviation and z is the Z-score for the desired confidence level.
Code, Python 3.7, Windows 10 x64
import numpy as np
rng = np.random.default_rng()
N = 100000
n = 2
def f(x):
    return np.power(x, n)
sample = f(rng.random(N)) # N samples of the function
m = np.mean(sample) # mean value of the sample, approaching integral value as N->∞
s = np.std(sample, ddof=1) # standard deviation with Bessel correction
e = s / np.sqrt(N) # Monte Carlo error decreases as inverse square root
t = 2.576 # For 99% confidence interval, we should take 2.58 sigma, per Gaussian distribution
#t = 3.00 # For 99.7% confidence interval, we should take 3 sigma, per Gaussian distribution
print(f'True integral value is {1.0/(1.0+n)}')
print(f'Computed integral value is in the range [{m-t*e}...{m+t*e}] with 99% confidence')
will print something like
True integral value is 0.3333333333333333
Computed integral value is in the range
[0.33141772204489295...0.3362795491124624] with 99% confidence
You could use a Z-score table, like this one, to print the table you want. You could vary N to get the desired dependency on N.
zscore = {'50%': 0.674, '80%': 1.282, '90%': 1.645, '95%': 1.960, '98%': 2.326, '99%': 2.576, '99.7%': 3.0}
for c, z in zscore.items():
    print(f'Computed integral value is in the range [{m-z*e}...{m+z*e}] with {c} confidence')

Based on Severin's answer, here is the code to calculate the values as stated in the question:
def calculate_error(n, m, z):
    p = m / n
    std_dev = (p * (1 - p)) ** 0.5  # Standard deviation of a Bernoulli variable
    error = std_dev / n ** 0.5      # Monte Carlo error decreases as the inverse square root
    return (p - z * error, p + z * error)

n = 1000
z = 2.576  # For a 99% confidence interval, take 2.576 sigma, per the Gaussian distribution
print(calculate_error(n, n * 0.5, z))
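For instance (not part of the original answer), plugging in the question's setup, where the expected hit fraction is p = 0.5 for the [0.25, 0.75) object, reproduces the simulated table quite closely: for 100 samples the 99% interval comes out as roughly 0.371..0.629, versus the simulated 0.37..0.63.
zscores = {'50%': 0.674, '90%': 1.645, '99%': 2.576}
for n in range(100, 1001, 100):
    m = n * 0.5                                 # expected hits for the [0.25, 0.75) object
    p = m / n
    error = (p * (1 - p)) ** 0.5 / n ** 0.5     # same Bernoulli estimate as above
    row = '\t'.join(f'{p - z*error:.5f}..{p + z*error:.5f}' for z in zscores.values())
    print(f'{n} samples:\t{row}')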

Related

Speeding up loop-rich Matlab function to calculate temperature distribution

I would like to speed up this function as much as possible in Matlab.
This is part of a bigger simulation project, and as it is one of the most called functions within the simulation, this is crucial.
For now, I tried generating a MEX file, but the speed was not better.
Vectorizing seems difficult (but would be beneficial due to the nested loops), given the non-linear operations.
function y = mixing(T,dis,rr,n)
%% ===================================================================
% input:   temperature of cells array T, distance array dis, number of cells
%          n, mixing ratio r
%
% output:  new temperature array
%
% purpose: calculates the temperature array of next timestep
% ===================================================================
for j = 1:n
    i = 1;
    r = rr;
    while i < dis(j)+1 && j+i <= n
        if (dis(j) < i)
            r = r*(dis(j)-floor(dis(j)));
        end
        d = T(j+i-1);
        T(j+i-1) = r*T(j+i) + (1-r)*T(j+i-1);
        T(j+i)   = r*d + (1-r)*T(j+i);
        i = i + 1;
    end
end
y = T;
end
Any ideas on how to speed-up this Matlab function?
Inputs: T is a 10x1 double, dis is a 10x1 double, rr is a 1x1 double, and n is a 1x1 integer value.
Example values: T = random('unif',55,65,10,1); dis = repmat(0.1,10,1); rr = rand; n = 10;
What I'm trying to compute with this is the degree of temperature mixing between water layers, given by the equations for T(j+i-1) and T(j+i).
This must be calculated for all j = 1, ..., n, that is, for all the layers, at all timesteps (n is the total number of water layers).

How to fix this floating point square root algorithm

I am trying to compute the IEEE-754 32-bit floating-point square root of various inputs, but for one particular input the algorithm below, based upon the Newton-Raphson method, won't converge. I am wondering what I can do to fix the problem. For the platform I am designing, I have a 32-bit floating-point adder/subtracter, multiplier, and divider.
For the input 0x7F7FFFFF (3.4028234663852886E38), the algorithm won't converge to the correct answer of 18446743523953729536.000000; this algorithm's answer gives 18446743523953737728.000000.
I am using MATLAB to implement my code before I implement this in hardware. I can only use single-precision floating-point values (SO NO DOUBLES).
clc; clear; close all;
% Input
R = typecast(uint32(hex2dec(num2str(dec2hex(((hex2dec('7F7FFFFF'))))))),'single')
% Initial estimate
OneOverRoot2 = single(1/sqrt(2));
Root2 = single(sqrt(2));
% Get low and high bits of input R
hexdata_high = bitand(bitshift(hex2dec(num2hex(single(R))),-16),hex2dec('ffff'));
hexdata_low = bitand(hex2dec(num2hex(single(R))),hex2dec('ffff'));
% Change exponent of input to -1 to get Mantissa
temp = bitand(hexdata_high,hex2dec('807F'));
Expo = bitshift(bitand(hexdata_high,hex2dec('7F80')),-7);
hexdata_high = bitor(temp,hex2dec('3F00'));
b = typecast(uint32(hex2dec(num2str(dec2hex(((bitshift(hexdata_high,16)+ hexdata_low)))))),'single');
% If exponent is odd ...
if (bitand(Expo,1))
    % Pretend the mantissa [0.5 ... 1.0) is multiplied by 2 as Expo is odd,
    % so it now has the value [1.0 ... 2.0)
    % Estimate the sqrt(mantissa) as [1.0 ... sqrt(2))
    % IOW: linearly map (0.5 ... 1.0) to (1.0 ... sqrt(2))
    Mantissa = (Root2 - 1.0)/(1.0 - 0.5)*(b - 0.5) + 1.0;
else
    % The mantissa is in range [0.5 ... 1.0)
    % Estimate the sqrt(mantissa) as [1/sqrt(2) ... 1.0)
    % IOW: linearly map (0.5 ... 1.0) to (1/sqrt(2) ... 1.0)
    Mantissa = (1.0 - OneOverRoot2)/(1.0 - 0.5)*(b - 0.5) + OneOverRoot2;
end
newS = Mantissa*2^(bitshift(Expo-127,-1));
S=newS
% S = (S + R/S)/2 method
for j = 1:6
    fprintf('S %u %f %f\n', j, S, (S-sqrt(R)));
    S = single((single(S) + single(single(R)/single(S))))/2;
    S = single(S);
end
goodaccuracy = (abs((single(S)-single(sqrt(single(R)))))) < 2^-23
difference = (abs((single(S)-single(sqrt(single(R))))))
% Get hexadecimal output
hexdata_high = (bitand(bitshift(hex2dec(num2hex(single(S))),-16),hex2dec('ffff')));
hexdata_low = (bitand(hex2dec(num2hex(single(S))),hex2dec('ffff')));
fprintf('FLOAT: T Input: %e\t\tCorrect: %e\t\tMy answer: %e\n', R, sqrt(R), S);
fprintf('output hex = 0x%04X%04X\n',hexdata_high,hexdata_low);
out = hex2dec(num2hex(single(S)));
I took a whack at this. Here's what I came up with:
float mysqrtf(float f) {
    if (f < 0) return 0.0f/0.0f;
    if (f == 1.0f / 0.0f) return f;
    if (f != f) return f;
    // half-ass an initial guess of 1.0.
    int expo;
    float foo = frexpf(f, &expo);
    float s = 1.0;
    if (expo & 1) foo *= 2, expo--;
    // this is the only case for which what's below fails.
    if (foo == 0x0.ffffffp+0) return ldexpf(0x0.ffffffp+0, expo/2);
    // do four newton iterations.
    for (int i = 0; i < 4; i++) {
        float diff = s*s-foo;
        diff /= s;
        s -= diff/2;
    }
    // do one last newton iteration, computing s*s-foo exactly.
    float scal = s >= 1 ? 4096 : 2048;
    float shi = (s + scal) - scal; // high 12 bits of significand
    float slo = s - shi;           // rest of significand
    float diff = shi * shi - foo;  // subtraction exact by sterbenz's theorem
    diff += 2 * shi * slo;         // opposite signs; exact by sterbenz's theorem
    diff += slo * slo;
    diff /= s;                     // diff == fma(s, s, -foo) / s.
    s -= diff/2;
    return ldexpf(s, expo/2);
}
The first thing to analyse is the formula (s*s-foo)/s in floating-point arithmetic. If s is a sufficiently good approximation to sqrt(foo), Sterbenz's theorem tells us that the numerator is within an ulp(foo) of the right answer --- all of that error is approximation error from computing s*s. Then we divide by s; this gives us at worst another half-ulp of approximation error. So, even without a fused multiply-add, diff is within 1.5 ulp of what it should be. And we divide it by two.
Notice that the initial guess doesn't in and of itself matter as long as you follow it up with enough Newton iterations.
Measure the error of an approximation s to sqrt(foo) by abs(s - foo/s). The error of my initial guess of 1 is at most 1. A Newton iteration in exact arithmetic squares the error and divides it by 4. A Newton iteration in floating-point arithmetic --- the kind I do four times --- squares the error, divides it by 4, and kicks in another 0.75 ulp of error. You do this four times and you find you have a relative error at most 0x0.000000C4018384, which is about 0.77 ulp. This means that four Newton iterations yield a faithfully-rounded result.
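As a quick numerical sanity check (not from the original answer), the four-iteration claim can be exercised with NumPy, keeping every operation in float32 to mimic single precision; if the analysis above is right, the worst relative error should stay below one ulp:
import numpy as np

rng = np.random.default_rng(0)
foo = rng.uniform(0.5, 2.0, size=1_000_000).astype(np.float32)

s = np.ones_like(foo)                       # initial guess of 1.0
for _ in range(4):                          # four single-precision Newton steps
    diff = (s * s - foo) / s                # every operation stays in float32
    s = s - diff / np.float32(2)

exact = np.sqrt(foo.astype(np.float64))     # reference value, effectively exact here
rel_err = np.abs(s.astype(np.float64) - exact) / exact
print(f'worst relative error after 4 iterations: {rel_err.max():.3e}')
print(f'one ulp at 1.0 is about {np.finfo(np.float32).eps:.3e}')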
I do a fifth Newton step to get a correctly-rounded square root. The reason why it works is a little more intricate.
shi holds the "top half" of s while slo holds the "bottom half." The last 12 bits in each significand will be zero. This means, in particular, that shi * shi and shi * slo and slo * slo are exactly representable as floats.
s*s is within two ulps of foo. shi*shi is within 2047 ulps of s*s. Thus shi * shi - foo is within 2049 ulps of zero; in particular, it's exactly representable and less than 2^-10.
You can check that you can add 2 * shi * slo and get an exactly-representable result that's within 2^-22 of zero and then add slo*slo and get an exactly representable result --- s*s-foo computed exactly.
When you divide by s, you kick in an additional half-ulp of error, which is at most 2^-48 here since our error was already so small.
Now we do a Newton step. We've computed the current error correctly to within 2^-46. Adding half of it to s gives us the square root to within 3*2^-48.
To turn this into a guarantee of correct rounding, we need to prove that there are no floats between 1/2 and 2, other than the one I special-cased, whose square roots are within 3*2^-48 of a midpoint between two consecutive floats. You can do some error analysis, get a Diophantine equation, find all of the solutions of that Diophantine equation, find which inputs they correspond to, and work out what the algorithm does on those. (If you do this, there is one "physical" solution and a bunch of "unphysical" solutions. The one real solution is the only thing I special-cased.) There may be a cleaner way, however.

implementing a simple big bang big crunch (BB-BC) in matlab

I want to implement a simple BB-BC in MATLAB, but there is a problem.
Here is the code to generate the initial population:
pop = zeros(N,m);
for j = 1:m
    % formula used to generate random number between a and b
    % a + (b-a) .* rand(N,1)
    pop(:,j) = const(j,1) + (const(j,2) - const(j,1)) .* rand(N,1);
end
const is an mx2 matrix which holds the constraints for the control variables, and m is the number of control variables. A random initial population is generated.
Here is the code to compute the center of mass in each iteration:
sum = zeros(1,m);
sum_f = 0;
for i = 1:N
    f = fitness(new_pop(i,:));
    %keyboard
    sum = sum + (1 / f) * new_pop(i,:);
    %keyboard
    sum_f = sum_f + 1/f;
    %keyboard
end
CM = sum / sum_f;
new_pop holds the newly generated population at each iteration and is initialized with pop.
CM is a 1xm matrix.
fitness is a function that gives the fitness value for each particle in the generation; the lower the fitness, the better the particle.
Here is the code to generate the new population in each iteration:
for i=1:N
    new_pop(i,:) = CM + rand(1) * alpha1 / (n_itr+1) .* ( const(:,2)' - const(:,1)');
end
alpha1 is 0.9.
The problem is that I run the code for 100 iterations, but the fitness just decreases and becomes negative. That shouldn't happen at all, because all particles are in the search space and CM should be there too, yet it goes way beyond the limits.
For example, if these are the limits (m = 4):
const = [1 10;
         1 9;
         0 5;
         1 4];
then running yields this CM:
57.6955 -2.7598 15.3098 20.8473
which is beyond all limits.
I tried limiting CM in my code, but then it just goes and sticks at all the top boundaries, which in this example gives CM =
10 9 5 4
I am confused. Is there something wrong in my implementation, or have I misunderstood something about BB-BC?

generate random numbers within a range with different probabilities

How can I generate a random number between A = 1 and B = 10 where each number has a different probability?
Example: number / probability
1 - 20%
2 - 20%
3 - 10%
4 - 5%
5 - 5%
...and so on.
I'm aware of some hard-coded workarounds which unfortunately are of no use with larger ranges, for example A = 1000 and B = 100000.
Assume we have a Rand() method which returns a random number R, 0 < R < 1. Can anyone post a code sample with a proper way of doing this? Preferably in C# / Java / ActionScript.
Build an array of 100 integers and populate it with 20 1's, 20 2's, 10 3's, 5 4's, 5 5's, etc. Then just randomly pick an item from the array.
int[] numbers = new int[100];
// populate the first 20 with the value '1'
for (int i = 0; i < 20; ++i)
{
    numbers[i] = 1;
}
// populate the rest of the array as desired.
// To get an item:
// Since your Rand() function returns 0 < R < 1
int ix = (int)(Rand() * 100);
int num = numbers[ix];
This works well if the number of items is reasonably small and your precision isn't too strict. That is, if you wanted 4.375% 7's, then you'd need a much larger array.
There is an elegant algorithm attributed by Knuth to A. J. Walker (Electronics Letters 10, 8 (1974), 127-128; ACM Trans. Math Software 3 (1977), 253-256).
The idea is that if you have a total of k * n balls of n different colors, then it is possible to distribute the balls in n containers such that container no. i contains balls of color i and at most one other color. The proof is by induction on n. For the induction step pick the color with the least number of balls.
In your example n = 10. Multiply the probabilities by a suitable m such that they are all integers. So, maybe m = 100 and you have 20 balls of color 0, 20 balls of color 1, 10 balls of color 2, 5 balls of color 3, etc. So, k = 10.
Now generate a table of dimension n, with each entry being a probability (the ratio of balls of color i vs. the other color) and the other color.
To generate a random ball, generate a random floating-point number r in the range [0, n). Let i be the integer part (floor of r) and x the excess (r – i).
if (x < table[i].probability) output i
else output table[i].other
The algorithm has the advantage that for each random ball you only make a single comparison.
Let me work out an example (same as Knuth).
Consider simulating throwing a pair of dice.
So P(2) = 1/36, P(3) = 2/36, P(4) = 3/36, P(5) = 4/36, P(6) = 5/36, P(7) = 6/36, P(8) = 5/36, P(9) = 4/36, P(10) = 3/36, P(11) = 2/36, P(12) = 1/36.
Multiply by 36 * 11 to get 396 balls: 11 of color 2, 22 of color 3, 33 of color 4, …, 11 of color 12.
We have k = 396 / 11 = 36.
Table[2] = (11/36, color 4)
Table[12] = (11/36, color 10)
Table[3] = (22/36, color 5)
Table[11] = (22/36, color 5)
Table[4] = (8/36, color 9)
Table[10] = (8/36, color 6)
Table[5] = (16/36, color 6)
Table[9] = (16/36, color 8)
Table[6] = (7/36, color 8)
Table[8] = (6/36, color 7)
Table[7] = (36/36, color 7)
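As a sketch only (the answer gives no code), here is one way to build and sample such a table in Python, using the question's example probabilities and filling in the unspecified remainder arbitrarily so they sum to 1:
import random

def build_table(probs):
    n = len(probs)
    scaled = {v: p * n for v, p in probs.items()}        # rescale so the average weight is 1.0
    small = [v for v, p in scaled.items() if p < 1.0]
    large = [v for v, p in scaled.items() if p >= 1.0]
    table = {}
    while small and large:
        s, l = small.pop(), large.pop()
        table[s] = (scaled[s], l)                        # container s holds color s plus some of color l
        scaled[l] -= 1.0 - scaled[s]                     # l donated mass to fill container s
        (small if scaled[l] < 1.0 else large).append(l)
    for v in small + large:                              # leftovers fill their own container exactly
        table[v] = (1.0, v)
    return table

def sample(table, values):
    i = random.choice(values)                            # pick a container uniformly
    p, other = table[i]
    return i if random.random() < p else other           # a single comparison per draw

probs = {1: 0.2, 2: 0.2, 3: 0.1, 4: 0.05, 5: 0.05, 6: 0.1, 7: 0.1, 8: 0.1, 9: 0.05, 10: 0.05}
table = build_table(probs)
values = list(probs)
print([sample(table, values) for _ in range(10)])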
Assuming that you have a function p(n) that gives you the desired probability for a random number:
r = rand() // a random number between 0 and 1
for i in A to B do
    if r < p(i)
        return i
    r = r - p(i)
done
A faster way is to create an array of (B - A) * 100 elements and populate it with numbers from A to B such that the ratio of the number of each item occurs in the array to the size of the array is its probability. You can then generate a uniform random number to get an index to the array and directly access the array to get your random number.
Map your uniform random results to the required outputs according to the probabilities.
E.g., for your example:
If `0 <= Rand() <= 0.2`: result = 1.
If `0.2 < Rand() <= 0.4`: result = 2.
If `0.4 < Rand() <= 0.5`: result = 3.
If `0.5 < Rand() <= 0.55`: result = 4.
If `0.55 < Rand() <= 0.6`: result = 5.
...
Here's an implementation of Knuth's Algorithm. As discussed in some of the answers, it works by:
1) creating a table of summed frequencies,
2) generating a random integer,
3) rounding it up with the ceiling function,
4) finding the "summed" range within which the random number falls, and outputting the original array entry based on it.
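For illustration (a sketch only, not the referenced implementation), those four steps could look like this in Python, using the question's example weights with the unspecified remainder filled in arbitrarily:
import bisect
import random

values  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
weights = [20, 20, 10, 5, 5, 10, 10, 10, 5, 5]            # percentages, summing to 100
summed = []
running = 0
for w in weights:                                         # step 1: table of summed frequencies
    running += w
    summed.append(running)                                # 20, 40, 50, 55, 60, ...

def draw():
    r = random.randint(1, summed[-1])                     # steps 2-3: random integer in 1..100
    return values[bisect.bisect_left(summed, r)]          # step 4: find the summed range r falls in

print([draw() for _ in range(10)])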
Inverse Transform
In probability speak, a cumulative distribution function F(x) returns the probability that any randomly drawn value, call it X, is <= some given value x. For instance, if I did F(4) in this case, I would get .55, because the running sum of probabilities in your example is {.2, .4, .5, .55, .6, .65, ...}. I.e. the probability of randomly getting a value less than or equal to 4 is .55. However, what I actually want to know is the inverse of the cumulative probability function, call it F_inv. I want to know the x value given the cumulative probability. I want to pass in F_inv(.55) and get back 4. That is why this is called the inverse transform method.
So, in the inverse transform method, we are basically trying to find the interval in the cumulative distribution in which a random Uniform (0,1) number falls. This works out to the algorithm that perreal and icepack posted. Here is another way to state it in terms of the cumulative distribution function
Generate a random number U
for x in A .. B
    if U <= F(x) then return x
Note that it might be more efficient to have the loop go from B to A and check if U >= F(x) if the smaller probabilities come at the beginning of the distribution

Randomly Generate a set of numbers of n length totaling x

I'm working on a project for fun and I need an algorithm to do as follows:
Generate a list of numbers of length n which add up to x
I would settle for list of integers, but ideally, I would like to be left with a set of floating point numbers.
I would be very surprised if this problem wasn't heavily studied, but I'm not sure what to look for.
I've tackled similar problems in the past, but this one is decidedly different in nature. Before, I generated different combinations of a list of numbers that add up to x. I'm sure that I could simply brute-force this problem, but that hardly seems like the ideal solution.
Anyone have any idea what this may be called, or how to approach it? Thanks all!
Edit: To clarify, I mean that the list should be length N while the numbers themselves can be of any size.
edit2: Sorry for my improper use of 'set', I was using it as a catch all term for a list or an array. I understand that it was causing confusion, my apologies.
This is how to do it in Python
import random
def random_values_with_prescribed_sum(n, total):
    x = [random.random() for i in range(n)]
    k = total / sum(x)
    return [v * k for v in x]
Basically you pick n random numbers, compute their sum and compute a scale factor so that the sum will be what you want it to be.
Note that this approach will not produce "uniform" slices, i.e. the distribution you get will tend to be more "egalitarian" than it would be if it were picked at random among all distributions with the given sum.
To see the reason you can just picture what the algorithm does in the case of two numbers with a prescribed sum (e.g. 1):
The point P is a generic point obtained by picking two random numbers, and it will be uniform inside the square [0,1]x[0,1]. The point Q is the point obtained by scaling P so that the sum is the required 1. As is clear from the picture, the points close to the center of the segment have a higher probability; for example, the exact center of the segment will be found by projecting any point on the diagonal (0,0)-(1,1), while the point (0, 1) will be found by projecting only points from the segment (0,0)-(0,1)... the diagonal's length is sqrt(2) = 1.4142..., while the square's side is only 1.0.
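A quick numerical illustration of this bias (not in the original answer): for two numbers scaled to sum to 1, the first part ends up concentrated around 0.5 rather than uniform on (0, 1):
import random

N = 100_000
pairs = [(random.random(), random.random()) for _ in range(N)]
scaled_first = [a / (a + b) for a, b in pairs]            # first part after scaling the pair to sum to 1
middle = sum(0.25 < v < 0.75 for v in scaled_first) / N
print(f'fraction of scaled first parts in (0.25, 0.75): {middle:.3f} (uniform would give 0.500)')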
Actually, you need to generate a partition of x into n parts. This is usually done in the following way: a partition of x into n non-negative parts can be represented by reserving n + x - 1 free places, putting borders in n - 1 arbitrarily chosen places, and stones in the rest. The stone groups add up to x, thus the number of possible partitions is the binomial coefficient C(n + x - 1, n - 1).
So your algorithm could be as follows: choose an arbitrary (n - 1)-subset of an (n + x - 1)-set; it uniquely determines a partition of x into n parts.
In Knuth's TAOCP, chapter 3.4.2 discusses random sampling. See Algorithm S there.
Algorithm S: (choose n arbitrary records from a total of N)
1. Set t = 0, m = 0.
2. Generate u, random and uniformly distributed on (0, 1).
3. If (N - t)*u >= n - m, skip the t-th record and increase t by 1; otherwise include the t-th record in the sample, and increase m and t by 1.
4. If m < n, return to step 2; otherwise, the algorithm is finished.
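Putting the two pieces together, here is a small Python sketch (an illustration, not code from the answer): Algorithm S picks the border positions, and the borders then determine the partition of x into n non-negative parts:
import random

def algorithm_s(N, n):
    # Choose n positions uniformly at random from range(N), returned in increasing order.
    chosen, t, m = [], 0, 0
    while m < n:
        u = random.random()
        if (N - t) * u >= n - m:
            t += 1                          # skip position t
        else:
            chosen.append(t)                # include position t in the sample
            t += 1
            m += 1
    return chosen

def random_partition(x, n):
    # Split the integer x into n non-negative ordered parts, uniformly at random.
    borders = algorithm_s(n + x - 1, n - 1)                 # border positions among the n + x - 1 places
    edges = [-1] + borders + [n + x - 1]
    return [edges[i + 1] - edges[i] - 1 for i in range(n)]  # stones between consecutive borders

parts = random_partition(10, 4)
print(parts, sum(parts))                                    # the parts always sum to 10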
The solution for non-integers is algorithmically trivial: you just select n arbitrary numbers that don't sum to 0, and normalize them by their sum.
If you want to sample uniformly in the region of N-1-dimensional space defined by x1 + x2 + ... + xN = x, then you're looking at a special case of sampling from a Dirichlet distribution. The sampling procedure is a little more involved than generating uniform deviates for the xi. Here's one way to do it, in Python:
import random

N = 5       # number of parts (example value)
x = 10.0    # the prescribed sum from the question (example value)
xs = [random.gammavariate(1, 1) for a in range(N)]
total = sum(xs)
xs = [x * v / total for v in xs]
If you don't care too much about the sampling properties of your results, you can just generate uniform deviates and correct their sum afterwards.
Here is a version of the above algorithm in Javascript
function getRandomArbitrary(min, max) {
    return Math.random() * (max - min) + min;
};

function getRandomArray(min, max, n) {
    var arr = [];
    for (var i = 0, l = n; i < l; i++) {
        arr.push(getRandomArbitrary(min, max))
    };
    return arr;
};

function randomValuesPrescribedSum(min, max, n, total) {
    var arr = getRandomArray(min, max, n);
    var sum = arr.reduce(function(pv, cv) { return pv + cv; }, 0);
    var k = total/sum;
    var delays = arr.map(function(x) { return k*x; })
    return delays;
};
You can call it with
var myarray = randomValuesPrescribedSum(0,1,3,3);
And then check it with
var sum = myarray.reduce(function(pv, cv) { return pv + cv;},0);
This code does a reasonable job. I think it produces a different distribution than 6502's answer, but I am not sure which is better or more natural. Certainly his code is clearer/nicer.
import random
def parts(total_sum, num_parts):
    points = [random.random() for i in range(num_parts-1)]
    points.append(0)
    points.append(1)
    points.sort()
    ret = []
    for i in range(1, len(points)):
        ret.append((points[i] - points[i-1]) * total_sum)
    return ret

def test(total_sum, num_parts):
    ans = parts(total_sum, num_parts)
    assert abs(sum(ans) - total_sum) < 1e-7
    print ans
test(5.5, 3)
test(10, 1)
test(10, 5)
In python:
a: create a list of (random #'s 0 to 1) times total; append 0 and total to the list
b: sort the list, measure the distance between each element
c: round the list elements
import random
import time
TOTAL = 15
PARTS = 4
PLACES = 3
def random_sum_split(parts, total, places):
    a = [0, total] + [random.random()*total for i in range(parts-1)]
    a.sort()
    b = [(a[i] - a[i-1]) for i in range(1, (parts+1))]
    if places == None:
        return b
    else:
        b.pop()
        c = [round(x, places) for x in b]
        c.append(round(total-sum(c), places))
        return c

def tick():
    if info.tick == 1:
        start = time.time()
        alpha = random_sum_split(PARTS, TOTAL, PLACES)
        end = time.time()
        log('alpha: %s' % alpha)
        log('total: %.7f' % sum(alpha))
        log('parts: %s' % PARTS)
        log('places: %s' % PLACES)
        log('elapsed: %.7f' % (end-start))
yields:
[2014-06-13 01:00:00] alpha: [0.154, 3.617, 6.075, 5.154]
[2014-06-13 01:00:00] total: 15.0000000
[2014-06-13 01:00:00] parts: 4
[2014-06-13 01:00:00] places: 3
[2014-06-13 01:00:00] elapsed: 0.0005839
To the best of my knowledge, this distribution is uniform.
