I am trying to write a little function to do stratified random sampling. That is, I have a vector of group membership for each element, and I want to select a single element (index) for each group. Thus, the input is the number of elements desired, and the group membership for each element. The output is a list of indices.
Here's the function I have:
function stratified_sample(n::Int64, groups::Array{Int64})
    # the output vector of indices
    ind = zeros(Int64, n)
    # first select n groups from the total set of possible groups
    group_samp = sample(unique(groups), n, replace = false)
    # cycle through the selected groups
    for i in 1:n
        # for each group, select one index whose group matches the current target group
        ind[i] = sample([1:length(groups)...][groups.==group_samp[i]], 1, replace = false)[1]
    end
    # return the indices
    return ind
end
When I run this code on a relatively large vector, for example, 1000 different groups and 40000 total entries, I get
julia> groups = sample(1:1000, 40000, replace = true)
40000-element Array{Int64,1}:
221
431
222
421
714
108
751
259
⋮
199
558
317
848
271
358
julia> @time stratified_sample(5, groups)
0.022951 seconds (595.06 k allocations: 19.888 MiB)
5-element Array{Int64,1}:
11590
17057
17529
25103
20651
And to compare this to normal random sampling of five elements from 40000 possible:
julia> @time sample(1:40000, 5, replace = false)
0.000005 seconds (5 allocations: 608 bytes)
5-element Array{Int64,1}:
38959
5850
3283
19779
30063
So my code is running roughly 4,600 times slower and using about 33,000 times more memory! What in the world have I done wrong, and is there a way to speed up this code? My guess is the real slowdown is happening in the subsetting step, i.e., [1:length(groups)...][groups.==group_samp[i]], but I can't find a better solution.
I've searched endlessly for this function within standard Julia packages, but no luck.
Any suggestions?
EDIT: I've been able to speed it up a lot by just taking a random sample, and checking to see if it satisfies the requirement that there are n unique groups selected:
function stratified_sample_random(n::Int64, groups::Array{Int64}, group_probs::Array{Float32})
    ind = zeros(Int64, n)
    my_samp = []
    while true
        my_samp = wsample(1:length(groups), group_probs, n, replace = false)
        if length(unique(groups[my_samp])) == n
            break
        end
    end
    return my_samp
end
Here, group_probs is just a vector of sampling weights, where each element of a group gets a weight of 1/s, with s being the number of elements in that group (so every group carries the same total weight). For example, if groups = [1,1,1,1,2,3,3], then the corresponding weights would be group_probs = [0.25, 0.25, 0.25, 0.25, 1, 0.5, 0.5]. This helps speed up the sampling by minimizing the probability of selecting multiple items from one group. Overall it works fairly well:
@time stratified_sample_random(5, groups, group_probs)
0.000122 seconds (14 allocations: 1.328 KiB)
5-element Array{Int64,1}:
32209
10184
30892
4861
30300
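For reference, here is a minimal sketch (my own addition, not part of the original post) of how a group_probs vector of this kind could be built from groups, giving each element a weight of one over the size of its group:

function make_group_probs(groups::Vector{Int64})
    counts = Dict{Int64,Int64}()
    for g in groups
        counts[g] = get(counts, g, 0) + 1      # number of elements in each group
    end
    # each element's weight is 1 / (size of its group)
    return Float32[1 / counts[g] for g in groups]
end

The result can then be passed as the group_probs argument above.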
From a bit of experimenting, weighted sampling by probability is not necessarily faster than standard sample(), but this depends on how many unique groups there are and on the desired value of n.
Of course, there's no guarantee this function will sample a set of indices covering n distinct groups on any given try, and it could cycle forever. My thought is to add a counter to the while loop, and if it has tried something like 10000 times with no luck, have it call the original stratified_sample function I presented, to ensure it returns a valid result. I don't love this solution, and there must be a more elegant and parsimonious approach, but it's definitely an improvement.
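A rough sketch of that counter-plus-fallback idea (my own illustration, with a hypothetical max_tries limit; it assumes using StatsBase for sample and wsample, as above):

function stratified_sample_fallback(n::Int64, groups::Array{Int64}, group_probs::Array{Float32}; max_tries::Int64 = 10_000)
    for attempt in 1:max_tries
        my_samp = wsample(1:length(groups), group_probs, n, replace = false)
        # accept the draw only if it covers n distinct groups
        if length(unique(groups[my_samp])) == n
            return my_samp
        end
    end
    # give up on rejection sampling and fall back to the exact (slower) method
    return stratified_sample(n, groups)
end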
Here, with [1:length(groups)...], you are splatting and allocating a 40000-element array n times; you should avoid this. Below is a roughly 33x faster version that uses a range inds instead. Knowing the real application, though, we could still come up with a way faster method.
function stratified_sample(n::Int64, groups::Array{Int64})
    # the output vector of indices
    ind = zeros(Int64, n)
    # first select n groups from the total set of possible groups
    group_samp = sample(unique(groups), n, replace = false)
    inds = 1:length(groups)
    # cycle through the selected groups
    for i in 1:n
        # for each group, select one index whose group matches the current target group
        ind[i] = sample(inds[groups.==group_samp[i]], 1, replace = false)[1]
    end
    # return the indices
    return ind
end
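For illustration, one way such a faster method could look (my own sketch, not part of the original answer) is to build a group-to-indices lookup once, so that no 40000-element mask is allocated per selected group:

function stratified_sample_dict(n::Int64, groups::Array{Int64})
    # map each group label to the indices of its elements (built in one pass)
    bygroup = Dict{Int64, Vector{Int64}}()
    for (i, g) in enumerate(groups)
        push!(get!(bygroup, g, Int64[]), i)
    end
    # pick n distinct groups, then one random index within each chosen group
    chosen = sample(collect(keys(bygroup)), n, replace = false)
    return [rand(bygroup[g]) for g in chosen]
end

This keeps the per-call work at one pass over groups plus O(n), instead of n full passes.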
Related
Let's consider a very large numpy array a of shape (M, N),
where M can typically be 1 or 100 and N 10-100,000,000.
We have an array of indices that can split it into many (K = 1,000,000) sub-arrays along axis=1.
We want to efficiently perform an operation like integration along axis=1 (np.sum, to take the simplest form) on each sub-array and return an (M, K) array.
An elegant and efficient solution was proposed by @Divakar in question 41920367, "How to split numpy array and perform certain actions on split arrays [Python]", but my understanding is that it only applies to cases where all sub-arrays have the same shape, which allows for reshaping.
But in our case the sub-arrays don't have the same shape, which so far has forced me to loop over the indices... please take me out of my misery...
Example
a = np.random.random((10, 100000000))
ind = np.sort(np.random.randint(10, 9000000, 1000000))
The sizes of the sub-arrays are not homogeneous:
sizes = np.diff(ind)
print(sizes.min(), sizes.max())
2, 8732
So far, the best I found is:
output = np.concatenate([np.sum(vv, axis=1)[:, None] for vv in np.split(a, ind, axis=1)], axis=1)
Possible feature request for numpy and scipy:
If looping is really unavoidable, at least having it done in C inside the numpy and scipy.integrate.simps (or romb) functions would probably speed things up.
Something like
output = np.sum(a, axis=1, split_ind=ind)
output = scipy.integrate.simps(a, x=x, axis=1, split_ind=ind)
output = scipy.integrate.romb(a, x=x, axis=1, split_ind=ind)
would be very welcome!
(where x itself could be splittable, or not)
Side note:
While trying this example, I noticed that with these numbers there was almost always an element of sizes equal to 0 (the sizes.min() is almost always zero).
This looks peculiar to me: since we are picking 1,000,000 integers between 10 and 9,000,000, I expected the odds that the same number comes up twice (such that diff = 0) to be close to 0, but they seem to be very close to 1.
Would that be due to the algorithm behind np.random.randint?
What you want is np.add.reduceat
output = np.add.reduceat(a, ind, axis = 1)
output.shape
Out[]: (10, 1000000)
Universal functions (ufuncs) are a very powerful tool in numpy.
As for the repeated indices, that's simply the Birthday Problem cropping up.
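As a rough back-of-the-envelope check (my own estimate, not from the original answer): drawing k integers uniformly from N possible values gives about

expected duplicate pairs ≈ k*(k-1)/(2*N) ≈ (10^6)^2 / (2 * 9*10^6) ≈ 5.6*10^4

for k = 1,000,000 and N ≈ 9,000,000, so at least one repeated index (and hence a zero-length sub-array) is essentially guaranteed.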
Great! Thanks! On my CentOS 6.9 VM I have the following results:
In [71]: a = np.random.random((10, 10000000))
In [72]: ind = np.unique(np.random.randint(10, 9000000, 100000))
In [73]: ind2 = np.append([0], ind)
In [74]: out = np.concatenate([np.sum(vv, axis=1)[:, None] for vv in np.split(a, ind, axis=1)], axis=1)
In [75]: out2 = np.add.reduceat(a, ind2, axis=1)
In [83]: np.allclose(out, out2)
Out[83]: True
In [84]: %timeit out = np.concatenate([np.sum(vv, axis=1)[:, None] for vv in np.split(a, ind, axis=1)], axis=1)
2.7 s ± 40.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [85]: %timeit out2 = np.add.reduceat(a, ind2, axis=1)
179 ms ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
That's a good 93% speed gain (a factor of 15) over the list concatenation :-)
I'm trying to construct and compare, in the fastest possible way, two random 0-1 vectors of the same length in Julia, each vector with the same number of zeros and ones.
This is all for a Monte Carlo simulation of the following probabilistic question:
We have two independent urns, each with n white balls and n black balls. We repeatedly draw a pair of balls, one from each urn, until the urns are empty. What is the probability that every pair has matching colors?
What I did is the following:
using Random
# Auxiliary function that compares the parity, element by element,
# of two random vectors of length 2n
function comp(n::Int64)
    sum((shuffle!(Vector(1:2*n)) .+ shuffle!(Vector(1:2*n))).%2)
end
The above generates two random permutations of the vector from 1 to 2n, adds them element by element, applies modulo 2 to each element, and then sums all the values of the resulting vector. I am using the parity of each number to model its color: odd for black and even for white.
If the final sum is zero, then the two random vectors had the same colors, element by element. Any other result means that the two vectors did not have matching colors in every position.
Then I set up the following function, which is just the Monte Carlo simulation of the desired probability:
# Here m is an optional argument that controls the number of random
# experiments in the simulation
function sim(n::Int64, m::Int64=24)
    # A counter for the valid cases
    x = 0
    for i in 1:2^m
        # A random pair of vectors is a valid case if they have
        # the same parity element by element, so
        if comp(n) == 0
            x += 1
        end
    end
    # The estimated value
    x/2^m
end
Now I want to know if there is a faster way to compare such vectors. I tried the following alternative construction and comparison for the random vectors:
shuffle!( repeat([0,1],n)) == shuffle!( repeat([0,1],n))
Then I changed the code of comp(n) accordingly.
With these changes the code runs slightly slower, which I checked with the @time macro. Another change I tried was replacing the for statement with a while statement, but the computation time remained the same.
Because I'm not a programmer (indeed, just yesterday I learned a bit of the Julia language and installed the Juno front-end), there is probably a faster way to do the same computations. Any tip will be appreciated, because the effectiveness of a Monte Carlo simulation depends on the number of random experiments, so the faster the computation, the larger the values we can test.
The key cost in this problem is shuffle!, therefore in order to maximize the simulation speed you can use the following (I add it as an answer as it is too long for a comment):
function test(n,m)
    ref = [isodd(i) for i in 1:2n]
    sum(all(view(shuffle!(ref), 1:n)) for i in 1:m) / m
end
What are the differences from the code proposed in the other answer:
You do not have to shuffle! both vectors; it is enough to shuffle! one of them, as the result of the comparison is invariant to applying the same permutation to both vectors after they have been shuffled independently. We can therefore assume that one of the vectors has been re-ordered so that it has trues in the first n entries and falses in the last n entries.
I apply shuffle! in place (i.e., the ref vector is allocated only once).
I use the all function on the first half of the vector; this way the check stops as soon as it hits the first false. If the first n entries are all true, I do not have to check the last n entries, since they must then all be false.
To get something cleaner, you could generate vectors of 0/1 values directly, and then just let Julia check for vector equality, e.g.
function rndvec(n::Int64)
    shuffle!(vcat(zeros(Bool,n), ones(Bool,n)))
end

function sim0(n::Int64, m::Int64=24)
    sum(rndvec(n) == rndvec(n) for i in 1:2^m) / 2^m
end
Avoiding allocation makes the code faster, as explained by Bogumił Kamiński (and letting Julia make the comparison is faster than his code).
function sim1(n::Int64, m::Int64=24)
    vref = vcat(zeros(Bool,n), ones(Bool,n))
    vshuffled = vref[:]
    sum(shuffle!(vshuffled) == vref for i in 1:2^m) / 2^m
end
To go even faster, use lazy evaluation and a fast exit: if the first elements already differ, you don't even need to generate the rest of the vectors.
This would make the code much trickier, though.
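As a rough illustration of that idea (my own sketch, not the answer's code, and not benchmarked), one could draw the balls one pair at a time and stop at the first mismatch, never materializing the full vectors:

function lazy_match(n::Int64)
    w1, b1 = n, n                      # white/black balls remaining in urn 1
    w2, b2 = n, n                      # white/black balls remaining in urn 2
    for _ in 1:2n
        c1 = rand() < w1 / (w1 + b1)   # true means a white ball drawn from urn 1
        c2 = rand() < w2 / (w2 + b2)   # true means a white ball drawn from urn 2
        c1 == c2 || return false       # colors differ: fast exit
        c1 ? (w1 -= 1) : (b1 -= 1)
        c2 ? (w2 -= 1) : (b2 -= 1)
    end
    return true
end

Most attempts fail within the first few pairs, which is where the saving would come from.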
It's a bit outside the spirit of the question, but you could also do some more math.
There are binomial(2*n, n) equally likely arrangements of each vector, and the two vectors agree exactly when they land on the same arrangement, so the probability is 1/binomial(2*n, n) and you could therefore just compute
function sim2(n::Int64, m::Int64=24)
    nvec = binomial(2*n, n)
    sum(rand(1:nvec) == 1 for i in 1:2^m) / 2^m
end
Here are some timings I obtain:
#time show(("sim0", sim0(6, 21)))
#time show(("sim1", sim1(6, 21)))
#time show(("sim2", sim2(6, 21)))
#time test(("test", test(6, 2^21)))
("sim0", 0.0010724067687988281) 4.112159 seconds (12.68 M allocations: 1.131 GiB, 11.47% gc time)
("sim1", 0.0010781288146972656) 0.916075 seconds (19.87 k allocations: 1.092 MiB)
("sim2", 0.0010628700256347656) 0.249432 seconds (23.12 k allocations: 1.258 MiB)
("test", 0.0010166168212890625) 1.180781 seconds (2.14 M allocations: 98.634 MiB, 2.22% gc time)
I'm doing MC simulations and I need to generate random integers within a range between 1 and a variable upper limit n_mol.
The specific Julia function for doing this is rand(1:n_mol), where n_mol is an integer that changes with every MC iteration. The problem is that doing it this way is slow... (possibly an issue to open for the Julia developers). So, instead of using that particular function call, I thought about generating a random float in [0,1), multiplying it by n_mol and then taking the integer part of the result: int(rand()*n_mol). The problem now is that int() rounds, so I could end up with numbers between 0 and n_mol, and I can't have 0... so the solution I'm using for the moment is ifloor plus 1, ifloor(rand()*n_mol)+1, which is considerably faster than the first version but slower than the second.
function t1(N,n_mol)
    for i = 1:N
        rand(1:n_mol)
    end
end

function t2(N,n_mol)
    for i = 1:N
        int(rand()*n_mol)
    end
end

function t3(N,n_mol)
    for i = 1:N
        ifloor(rand()*n_mol)+1
    end
end
@time t1(1e8,123456789)
@time t2(1e8,123456789)
@time t3(1e8,123456789)
elapsed time: 3.256220849 seconds (176 bytes allocated)
elapsed time: 0.482307467 seconds (176 bytes allocated)
elapsed time: 0.975422095 seconds (176 bytes allocated)
So, is there any way of doing this faster with speeds near the second test?
It's important because the MC simulation goes for more than 1e10 iterations.
The result has to be an integer because it will be used as an index of an array.
The rand(r::Range) code is quite fast, given the following two considerations. First, Julia calls a 52-bit RNG twice to obtain random integers but only once to obtain random floats, which, with some bookkeeping, accounts for roughly a factor of 2.5. The second thing is that
(rand(Uint) % k)
is only evenly distributed over 0 to k-1 if k is a power of 2. This is taken care of with rejection sampling, which explains more or less the remaining additional cost.
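To make the rejection idea concrete, here is a minimal sketch (my own illustration in current UInt64 syntax, not Julia's actual implementation):

function unbiased_rand(k::UInt64)
    # accept only raw draws below the largest multiple of k representable here,
    # so every residue modulo k is equally likely; otherwise redraw
    limit = typemax(UInt64) - typemax(UInt64) % k
    while true
        x = rand(UInt64)
        x < limit && return 1 + x % k   # value in 1:k
    end
end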
If speed is extremely important, you can use a simpler random number generator than Julia's and ignore those issues, for example a linear congruential generator without rejection sampling:
function lcg(old)
    a = unsigned(2862933555777941757)
    b = unsigned(3037000493)
    a*old + b
end

function randfast(k, x::Uint)
    x = lcg(x)
    1 + rem(x, k) % Int, x
end

function t4(N, R)
    state = rand(Uint)
    for i = 1:N
        x, state = randfast(R, state)
    end
end
But be careful if the range is (really) big.
m = div(typemax(Uint),3)*2
julia> mean([rand(1:m)*1.0 for i in 1:10^7])
6.148922790091841e18
julia> m/2
6.148914691236517e18
but (!)
julia> mean([(rand(Uint) % m)*1.0 for i in 1:10^7])
5.123459611164573e18
julia> 5/12*m
5.124095576030431e18
Note that in 0.4, int() is deprecated, and you're asked to use round() instead.
function t2(N,n_mol)
    for i = 1:N
        round(rand()*n_mol)
    end
end
gives 0.27 seconds on my machine (using Julia 0.4).
I'm searching for an algorithm (no matter what programming language, maybe pseudo-code?) where you get a random number with different probabilities.
For example:
A random generator which simulates a die where the chance of rolling a '6' is 50% and the chance of each of the other 5 numbers is 10%.
The algorithm should be scalable, because this is my exact problem:
I have an array (or database) of elements, from which I want to select one random element, but each element should have a different probability of being selected. My idea is that every element gets a number, and this number divided by the sum of all numbers gives the chance of that element being selected.
Does anybody know a good programming language (or library) for this problem?
The best solution would be a good SQL query which delivers one random entry.
But I would also be happy with any hint or attempt in another programming language.
A simple algorithm to achieve it is:
Create an auxiliary array where sum[i] = p1 + p2 + ... + pi. This is done only once.
When you draw, pick a number r with uniform distribution over [0, sum[n]), and binary search for the first entry of sum that is higher than r.
It is easy to see that the probability for r to lie in a certain range [sum[i-1], sum[i]) is indeed sum[i] - sum[i-1] = pi.
(In the above, we regard sum[0] = 0, for completeness.)
For your dice example:
You have:
p1 = p2 = ... = p5 = 0.1
p6 = 0.5
First, calculate the sum array:
sum[1] = 0.1
sum[2] = 0.2
sum[3] = 0.3
sum[4] = 0.4
sum[5] = 0.5
sum[6] = 1
Then, each time you need to draw a number: draw a random number r in [0,1), and pick the first element whose cumulative sum is greater than r, for example:
r1 = 0.45 -> element = 5
r2 = 0.8 -> element = 6
r3 = 0.1 -> element = 2
r4 = 0.09 -> element = 1
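For concreteness, the whole scheme fits in a few lines; here is a sketch in Julia (my own illustration, any language with a binary search works the same way):

function weighted_pick(probs)
    cum = cumsum(probs)                  # cum[i] = p1 + ... + pi
    r = rand() * cum[end]                # uniform over [0, total)
    return searchsortedfirst(cum, r)     # first index i with cum[i] >= r
end

weighted_pick([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # returns 6 about half the time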
An alternative answer. Your example was in percentages, so set up an array with 100 slots. A 6 is 50%, so put 6 in 50 of the slots. 1 to 5 are at 10% each, so put 1 in 10 slots, 2 in 10 slots etc. until you have filled all 100 slots in the array. Now pick one of the slots at random using a uniform distribution in [0, 99] or [1, 100] depending on the language you are using.
The contents of the selected array slot will give you the distribution you want.
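As a quick sketch of that slot-array idea (my own illustration, in Julia):

# 100 slots: the value 6 fills 50 of them, the values 1 to 5 fill 10 each
table = vcat(fill(6, 50), [fill(v, 10) for v in 1:5]...)
roll = table[rand(1:100)]    # a uniform slot pick reproduces the weights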
ETA: On second thoughts, you don't actually need the array, just use cumulative probabilities to emulate the array:
r = rand(100) // In range 0 -> 99 inclusive.
if (r < 50) return 6; // Up to 50% returns a 6.
if (r < 60) return 1; // Between 50% and 60% returns a 1.
if (r < 70) return 2; // Between 60% and 70% returns a 2.
etc.
You already know what numbers are in what slots, so just use cumulative probabilities to pick a virtual slot: 50; 50 + 10; 50 + 10 + 10; ...
Be careful of edge cases and whether your RNG is 0 -> 99 or 1 -> 100.
I'm trying to make a randomizer that will use the Monte Carlo Hit or Miss Simulation.
I have a Key-Value pair that represents the ID and the probability value:
ID - Value
2 - 0.37
1 - 0.35
4 - 0.14
3 - 0.12
When you add all of those values, you will get a total of 1.0.
You can imagine those values as the total area of a "slice" on the "wheel" (EG: ID 2 occupies 37% of the wheel, while ID 3 only occupies 12% of the wheel). When converted to "range" it will look like this:
ID - Value - Range
2 - 0.37 - 0 to 37
1 - 0.35 - 37 to 72
4 - 0.14 - 72 to 86
3 - 0.12 - 86 to 100
Now, I am using Random.NextDouble() to generate a random value that is between 0.0 and 1.0. That random value will be considered as the "spin" on the wheel. Say, the randomizer returns 0.35, then ID 2 will be selected.
What is the best way to implement this given that I have an array of doubles?
The simplest solutions are often the best. If your range is 0 - 100 by design (or another manageably small number), you can allocate an int[] and use the table of ranges you created to fill in the ID at each corresponding index; your "throw" will then look like:
int randomID = rangesToIDs[random.nextInt(rangesToIDs.length)];
By the way, it is not necessary to sort the IDs by range size; since the random numbers are assumed to be uniformly distributed, it does not matter where in the lookup table a range is placed. It only matters that the number of entries for an ID is proportional to the chance of throwing that ID.
Let's assume your initial data is represented as array D[n], where D[i] = (id, p) and sum(D[i].p for i=0..n-1) == 1.
Build a second array P[n] such that P[i] = (q, id): P[i] = (sum(D[j].p for j in 0..i), D[i].id) -- i.e., convert the individual probability of each slice i into the cumulative probability of all slices up to and including i. Note that, by definition, this array P is ordered by field q (i.e. by cumulative probability).
Now you can use binary search to find the slice chosen by the random number r (0 <= r <= 1):
find the lowest i such that r <= P[i].q; then P[i].id is your slice.
It is possible to speed up the lookup further by hashing the probability range with a fixed grid. I can write more details on this if anybody is interested.
As jk wrote, a sorted dictionary should be fine.
Let's say you have a dictionary like this:
0.37 2
0.72 1
0.86 4
1.00 3
Say you roll xx = 0.66.
Iterate through the dictionary starting from the lowest key (that's 0.37):
if xx < dict[i].key
    return dict[i].value
Another solution that comes to mind is a list of custom objects, each containing a lower bound, an upper bound and a value. You then iterate through the list and check whether the rolled number falls between the bounds.
A sorted map/dictionary with the 'Value' as the key and the 'ID' as the value would allow you to quickly find the upper bound of the range you are in and then look up the ID for that range.
Assuming your dictionary allows it, a binary search would be better for finding the upper bound than iterating through the entire dictionary.
boundaries = [37, 72, 86, 100]
ids = [2, 1, 4, 3]
num = 100 * random
for i in 1 to length(boundaries):
    if num < boundaries[i] then return ids[i]