Generate a random interval where probability of n appearing is 1/n - algorithm

Let's say we have a news website with 100 pages, each displaying several articles, and we want to parse the website regularly to keep statistics on the number of comments per article.
The number of comments on an article changes rapidly on new articles (so on the first pages), and very slowly on very old articles (on the last pages).
So I want to parse the first pages much more often than the last pages.
A solution I imagined would be to generate, on each run, an interval of the pages we want to parse, with the additional requirement that page n appears in this interval with probability 1/n.
For example, we would parse the page 1 every time.
The page 2 would appear in the interval half of the time.
The page 3, 1/3 of the time...
Our algorithm would then generate the 'interval' [1,1] most of the time. The interval [1,2] would be less likely, [1,3] even less ... and [1,100] would be really rare.
Do you see a way to implement this algorithm with the usual random() function available in most languages?
Is there another way to solve the underlying problem (parsing recent content more often) that makes more sense?
Thanks for your help.
edit:
Here is an implementation in Python based on the answer provided by David Eisenstat.
I tried to implement the version where random() generates integers, but I get strange results.
from math import floor
from random import random

# return a number between 1 and n: the last page of the interval to parse
def randPage(n):
    while True:
        r = floor(1 / (1 - random()))
        if r <= n:
            return r

If you have a function random() that returns doubles in the interval [0, 1), then you look at pages 1 to floor(1 / (1 - random())). Page n is examined if and only if the output of random() is in the interval [1 - 1/n, 1), which has length 1/n.
If you're using an integer random() function in the interval [0, RAND_MAX], then let k = random() and look at RAND_MAX / k pages if k != 0 or all of them if k == 0.
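A minimal Python sketch of that integer variant, emulating an integer random() on [0, RAND_MAX]; the RAND_MAX value and the function name are illustrative assumptions, not part of the original answer:

import random

RAND_MAX = 2**31 - 1   # assumed range of the integer generator

def pages_to_parse(total_pages):
    """How many leading pages to parse this run: RAND_MAX // k pages,
    or all of them when k == 0, capped at the number of pages that exist."""
    k = random.randint(0, RAND_MAX)
    if k == 0:
        return total_pages
    return min(RAND_MAX // k, total_pages)

# Over many runs, page 1 is always parsed, page 2 about half the time,
# page 3 about a third of the time, and so on.
counts = [0] * 101
for _ in range(100000):
    for page in range(1, pages_to_parse(100) + 1):
        counts[page] += 1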

Related

Best way to distribute a given resource (eg. budget) for optimal output

I am trying to find a solution in which a given resource (e.g. a budget) is best distributed across different options, each of which yields different results for the amount of resource it receives.
Let's say I have N = 1200 and some functions. (a, b, c, d are some unknown variables)
f1(x) = a * x
f2(x) = b * x^c
f3(x) = a*x + b*x^2 + c*x^3
f4(x) = d^x
f5(x) = log x^d
...
Also, let's say there are n of these functions, each yielding different results based on its input x, where x = 0 or x >= m, for some constant m.
Although I am not able to find exact formulas for the given functions, I am able to evaluate their outputs. This means that I can compute:
X = f1(N1) + f2(N2) + f3(N3) + ... + fn(Nn), where N1 + ... + Nn = N, for as many cases as there are ways of distributing N into n numbers, and find the specific case where X is greatest.
How would I actually go about finding the best distribution of N with the least computation power, using whatever libraries currently available?
If you are happy with allocations constrained to be whole numbers, then there is a dynamic programming solution of cost O(Nn) - you can increase accuracy by scaling if you want, but this will increase CPU time.
For each i = 1 to n, maintain an array where element j gives the maximum yield using only the first i functions, giving them a total allowance of j.
For i = 1 this is simply the result of f1().
For i = k+1, when working out the result for j, consider each possible way of splitting the j units between f_{k+1}() and the first k functions, using the table that tells you the best return from a distribution among those first k functions - so you can calculate the table for i = k+1 from the table created for i = k.
At the end you get the best possible return for n functions and N resources. It is easier to work out what that best allocation actually is if you keep a set of arrays telling you the best way to distribute k units among the first i functions, for all possible values of i and k. Then you can look up the best allocation for f100(), subtract the amount allocated to f100() from N, look up the best allocation for f99() given the remaining resources, and carry on like this until you have worked out the best allocations for all f().
As an example suppose f1(x) = 2x, f2(x) = x^2 and f3(x) = 3 if x>0 and 0 otherwise. Suppose we have 3 units of resource.
The first table is just f1(x) which is 0, 2, 4, 6 for 0,1,2,3 units.
The second table is the best you can do using f1(x) and f2(x) for 0,1,2,3 units and is 0, 2, 4, 9, switching from f1 to f2 at x=2.
The third table is 0, 3, 5, 9. I can get 3 and 5 by using 1 unit for f3() and the rest for the best solution in the second table. 9 is simply the best solution in the second table - there is no better solution using 3 resources that gives any of them to f3().
So 9 is the best answer here. One way to work out how to get there is to keep the tables around and recalculate that answer. The 9 comes from f3(0) + 9 from the second table, so all 3 units are available to f2() + f1(). The second table's 9 comes from f2(3), so there are no units left for f1(), and we get f1(0) + f2(3) + f3(0).
When you are working out the resources to use at stage i = k+1, you have a table from i = k that tells you exactly the result to expect from whatever resources are left over after you decide how much to use at stage i = k+1. The best distribution does not become incorrect, because at stage i = k you have already worked out the result of the best distribution for every possible number of remaining resources.
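A minimal Python sketch of that table-based DP, using the three example functions above in place of the real (black-box) yield functions:

def best_allocation(funcs, N):
    """best[j] = maximum total yield using the functions seen so far
    with a total allowance of j units."""
    best = [funcs[0](j) for j in range(N + 1)]              # table for i = 1
    for f in funcs[1:]:
        # table for i = k+1, built from the table for i = k
        best = [max(f(x) + best[j - x] for x in range(j + 1))
                for j in range(N + 1)]
    return best[N]

# The worked example: f1(x) = 2x, f2(x) = x^2, f3(x) = 3 if x > 0 else 0,
# with 3 units of resource.
funcs = [lambda x: 2 * x,
         lambda x: x * x,
         lambda x: 3 if x > 0 else 0]
print(best_allocation(funcs, 3))   # -> 9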

How can I acquire a certain "random range" in a higher frequency?

My question is basically, "how can I obtain certain random values within a specific range more than random values outside the range?"
Allow me to demonstrate what I mean:
If I were to, on a good amount of trials, start picking a variety of random numbers from 1-10, I should be seeing more numbers in the 7-10 range than in the 1-6 range.
I tried a couple of ways, but I am not getting desirable results.
First Function:
function getAverage(i)
    math.randomseed(os.time())
    local sum = 0;
    for j = 1,i do
        sum = sum + (1-math.random()^3)*10
    end
    print(sum/i)
end

getAverage(500)
I was constantly getting numbers only around 7.5, such as 7.48 and 7.52. Although this does indeed get me a number within my range, I don't want such strict consistency.
Second Function:
function getAverage(i)
    math.randomseed(os.time())
    local sum = 0;
    for j = 1,i do
        sum = sum + (math.random() > .3 and math.random(7,10) or math.random(1,6))
    end
    print(sum/i)
end

getAverage(500)
This function didn't work as I wanted it to either. I was primarily getting numbers such as 6.8 and 7.2, but nothing even close to 8.
Third Function:
function getAverage(i)
    math.randomseed(os.time())
    local sum = 0;
    for j = 1,i do
        sum = sum + (((math.random(10) * 2)/1.2)^1.05) - math.random(1,3)
    end
    print(sum/i)
end

getAverage(500)
This function was giving me slightly more favorable results, with the function consistently returning 8, but that is the issue - consistency.
What type of paradigms or practical solutions can I use to generate more random numbers within a specific range over another range?
I have labeled this as Lua, but a solution in any language that is understandable is acceptable.
I don't want such strict consistency.
What does that mean?
If you average a very large number of values in a given range from any RNG, you should expect it to produce (nearly) the same number every time. That means each of the numbers in the range was equally likely to appear.
This function didn't work as I wanted it to either. I primarily getting numbers such as 6.8 and 7.2 but nothing even close to 8.
You have to clarify what "didn't work" means. Why would you expect it to give you 8? You can see it won't just by looking at the formula you used.
For instance, if you'd used math.random(1,10), assuming all numbers in the range have an equal chance of appearing, you should expect the average to be 5.5, dead in the middle of 1 and 10 (because (1+2+3+4+5+6+7+8+9+10)/10 = 5.5).
You used math.random() > .3 and math.random(7,10) or math.random(1,6), which says: 70% of the time give 7, 8, 9, or 10 (average = 8.5), and 30% of the time give 1, 2, 3, 4, 5, or 6 (average = 3.5). That should give you an overall average of 7 (because 3.5 * .3 + 8.5 * .7 = 7). If you bump up your sample size, that's exactly what you'll see. You're seeing values on either side because your sample size is so small (try bumping it up to 100000).
I've made skewed random values before by simply generating two random numbers in the range, and then picking the largest (or smallest). This skews the probability towards the high (or low) endpoint.
Picking the smallest of two gives you a linear probability distribution.
Picking the smallest of three gives you a parabolic distribution (more selectivity, less probability at "the other end"). For my needs, a linear distribution was fine.
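A quick sketch of that trick, in Python rather than Lua, assuming the same 1-10 range from the question:

import random

def skewed_rand(lo, hi):
    """Pick two uniform values in [lo, hi] and keep the larger one,
    which linearly biases the result toward the high end of the range."""
    return max(random.randint(lo, hi), random.randint(lo, hi))

# Quick check of the bias: the average lands noticeably above the
# uniform midpoint of 5.5 (around 7 for two draws from 1-10).
print(sum(skewed_rand(1, 10) for _ in range(100000)) / 100000.0)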
Not exactly what you wanted, but maybe it's good enough.
Have fun!

Copying Books UVa Online Judge Dynamic Programming Solution

I can solve the Copying Books problem using binary search as it is easy to implement, but I have just started solving dynamic programming problems and I wanted to know the dynamic programming solution for this problem.
Before the invention of book-printing, it was very hard to make a
copy of a book. All the contents had to be re-written by hand by so
called scribers. The scriber had been given a book and after several
months he finished its copy. One of the most famous scribers lived in
the 15th century and his name was Xaverius Endricus Remius Ontius
Xendrianus (Xerox). Anyway, the work was very annoying and boring. And
the only way to speed it up was to hire more scribers.
Once upon a time, there was a theater ensemble that wanted to play
famous Antique Tragedies. The scripts of these plays were divided into
many books and actors needed more copies of them, of course. So they
hired many scribers to make copies of these books. Imagine you have m
books (numbered 1, 2, ...., m) that may have different number of
pages ( p_1, p_2, ..., p_m) and you want to make one copy of each
of them. Your task is to divide these books among k scribes, k <=
m. Each book can be assigned to a single scriber only, and every
scriber must get a continuous sequence of books. That means, there
exists an increasing succession of numbers 0 = b_0 < b_1 < b_2 < ... < b_{k-1} <= b_k = m such that the i-th scriber gets the sequence of books with numbers between b_{i-1}+1 and b_i. The time needed to make a copy of
all the books is determined by the scriber who was assigned the most
work. Therefore, our goal is to minimize the maximum number of pages
assigned to a single scriber. Your task is to find the optimal
assignment.
For Binary Search I am doing the following.
Low =1 and High = Sum of pages of all books
Run Binary search
For MID (the maximum pages assigned to any scribe), assign books greedily so that no scribe gets more than MID pages
If some scribes remain without work, the actual answer is less than MID; if books remain unassigned, the actual answer is more than MID, and I update the bounds accordingly.
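For concreteness, here is a minimal Python sketch of that binary search; the greedy feasibility check is the "assign books greedily" step, and the pages list and k simply reuse the example from the answer below:

def feasible(pages, k, limit):
    """Greedy check: can the books be split into at most k contiguous
    chunks with no chunk exceeding `limit` pages?"""
    scribes, current = 1, 0
    for p in pages:
        if p > limit:
            return False
        if current + p > limit:
            scribes += 1          # start a new scribe on this book
            current = p
        else:
            current += p
    return scribes <= k

def min_max_pages(pages, k):
    low, high = 1, sum(pages)     # Low = 1, High = sum of pages of all books
    while low < high:
        mid = (low + high) // 2
        if feasible(pages, k, mid):
            high = mid            # MID is achievable, try smaller
        else:
            low = mid + 1         # books remain, need a larger MID
    return low

print(min_max_pages([11, 1, 1, 10, 1, 1, 3, 3], 2))   # -> 18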
Here is a possible dynamic programming solution written in python. I use indexing starting from 0.
k = 2  # number of scribes
# number of pages per book: 11 pages for the first book, 1 for the second, etc.
pages = [11, 1, 1, 10, 1, 1, 3, 3]
m = len(pages)  # number of books

def find_score(assignment):
    max_pages = -1
    for scribe in assignment:
        max_pages = max(max_pages, sum([pages[book] for book in scribe]))
    return max_pages

def find_assignment(assignment, scribe, book):
    if book == m:
        return find_score(assignment), assignment
    assign_current = [x[:] for x in assignment]  # deep copy
    assign_current[scribe].append(book)
    current = find_assignment(assign_current, scribe, book + 1)
    if scribe == k - 1:
        return current
    assign_next = [x[:] for x in assignment]  # deep copy
    assign_next[scribe + 1].append(book)
    next = find_assignment(assign_next, scribe + 1, book + 1)
    return min(current, next)

initial_assignment = [[] for x in range(k)]
print find_assignment(initial_assignment, 0, 0)
The function find_assignment returns the assignment as a list where the ith element is a list of book indexes assigned to the ith scribe. The score of the assignment is returned as well (the max number of pages a scribe has to copy in the assignment).
The key to dynamic programming is to first identify the subproblem. In this case, the books are ordered and can only be assigned sequentially. Thus the subproblem is to find an optimal assignment for the last n books using s scribes (where n < m and s < k). A subproblem can be solved with smaller subproblems using the following relation: min(assigning book to the "current" scribe, assigning the book to the next scribe).
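As a complement, here is a hedged sketch (Python 3) of a more standard memoized formulation of that subproblem, where the state is (first unassigned book, scribes remaining) and each remaining scribe takes one contiguous block. It is a slightly different but equivalent formulation, offered for illustration rather than as part of the original answer, and it reuses the pages and k from the code above:

from functools import lru_cache
from itertools import accumulate

pages = [11, 1, 1, 10, 1, 1, 3, 3]
k = 2
m = len(pages)
prefix = [0] + list(accumulate(pages))   # prefix[j] - prefix[i] = pages of books i..j-1

@lru_cache(maxsize=None)
def best(i, s):
    """Minimal possible max-pages when assigning books i..m-1 to s scribes."""
    if s == 1:
        return prefix[m] - prefix[i]     # one scribe copies everything left
    # give books i..j-1 to the current scribe, the rest to the remaining s-1 scribes
    return min(max(prefix[j] - prefix[i], best(j, s - 1))
               for j in range(i + 1, m - s + 2))

print(best(0, k))   # -> 18 for this example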

How do I get an unbiased random sample from a really huge data set?

For an application I'm working on, I need to sample a small set of values from a very large data set, on the order of a few hundred taken from about 60 trillion (and growing).
Usually I use the technique of seeing if a uniform random number r (0..1) is less than S/T, where S is the number of sample items I still need, and T is the number of items in the set that I haven't considered yet.
However, with this new data, I don't have time to roll the die for each value; there are too many. Instead, I want to generate a random number of entries to "skip", pick the value at the next position, and repeat. That way I can just roll the die and access the list S times. (S is the size of the sample I want.)
I'm hoping there's a straightforward way to do that and create an unbiased sample, along the lines of the S/T test.
To be honest, approximately unbiased would be OK.
This is related (more or less a follow-on) to this person's question:
https://math.stackexchange.com/questions/350041/simple-random-sample-without-replacement
One more side question... the person who first showed this to me called it the "mailman's algorithm", but I'm not sure if he was pulling my leg. Is that right?
How about this:
precompute S random numbers from 0 to the size of your dataset.
order your numbers, low to high
store the difference between consecutive numbers as the skip size
iterate through the large dataset using the skip size above.
...The assumption being the order you collect the samples doesn't matter
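A minimal Python sketch of that idea, assuming the data can be walked with a cursor and using random.sample so the precomputed positions are distinct:

import random

def precomputed_skips(dataset_size, s):
    """Pick s distinct positions up front, sort them, and return the gaps
    between consecutive positions as skip sizes."""
    positions = sorted(random.sample(range(dataset_size), s))
    skips, prev = [], 0
    for pos in positions:
        skips.append(pos - prev)   # records to skip before the next read
        prev = pos + 1
    return skips

# Usage: one pass over the data, skipping ahead by each gap and reading one record.
for gap in precomputed_skips(60 * 10**6, 300):   # scaled-down sizes for illustration
    pass  # ...advance the cursor by `gap` records, then read one...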
So I thought about it, and got some help from http://math.stackexchange.com
It boils down to this:
If I picked n items randomly all at once, where would the first one land? That is, where is min({r_1 ... r_n})? A helpful fellow at math.stackexchange boiled it down to this equation:
x = 1 - (1 - r) ** (1 / n)
That is, the CDF of the minimum is 1 - (1 - x)^n; set it equal to a uniform r and solve for x. Pretty easy.
If I generate a uniform random number and plug it in for r, this is distributed the same as min({r_1 ... r_n}) -- the same way that the lowest item would fall. Voila! I've just simulated picking the first item as if I had randomly selected all n.
So I skip over that many items in the list, pick that one, and then....
Repeat until n is 0
That way, if I have a big database (like Mongo), I can skip, find_one, skip, find_one, etc. Until I have all the items I need.
The only problem I'm having is that my implementation favors the first and last element in the list. But I can live with that.
In Python 2.7, my implementation looks like:
import numpy
import pprint

def skip(n):
    """
    Produce a random number with the same distribution as
    min({r_0, ... r_n}) to see where the next smallest one is
    """
    r = numpy.random.uniform()
    return 1.0 - (1.0 - r) ** (1.0 / n)

def sample(T, n):
    """
    Take n items from a list of size T
    """
    t = T
    i = 0
    while t > 0 and n > 0:
        s = skip(n) * (t - n + 1)
        i += s
        yield int(i) % T
        i += 1
        t -= s + 1
        n -= 1

if __name__ == '__main__':
    t = [0] * 100
    for c in xrange(10000):
        for i in sample(len(t), 10):
            t[i] += 1  # this is where we would read value i
    pprint.pprint(t)

How to test if one set of (unique) integers belongs to another set, efficiently?

I'm writing a program where I have to test whether one set of unique integers A is contained in another set of unique integers B. However, this operation might be done several hundred times per second, so I'm looking for an efficient algorithm to do it.
For example, if A = [1 2 3] and B = [1 2 3 4], it is true, but if B = [1 2 4 5 6], it's false.
I'm not sure how efficient it is to just sort and compare, so I'm wondering if there are any more efficient algorithms.
One idea I came up with was to map each number n to its corresponding n'th prime: that is, 1 → 2, 2 → 3, 3 → 5, 4 → 7, etc. Then I could calculate the product of A, and if that product is a factor of the corresponding product of B, we could say with certainty that A is a subset of B. For example, if A = [1 2 3] and B = [1 2 3 4], the primes are [2 3 5] and [2 3 5 7] and the products are 2*3*5 = 30 and 2*3*5*7 = 210. Since 210 % 30 = 0, A is a subset of B. I'm expecting the largest integer to be a couple of million at most, so I think it's doable.
Are there any more efficient algorithms?
The asymptotically fastest approach would be to just put each set in a hash table and query each element, which is O(N) time. You cannot do better (since it will take that much time to read the data).
Most set data structures already support expected and/or amortized O(1) query time. Some languages even support this operation directly. For example, in Python you could just do
A < B
Of course the picture changes drastically depending on what you mean by "this operation is repeated". If you have the ability to do precalculations on the data as you add it to the set (which you presumably do), this will allow you to subsume the minimal O(N) time into other operations such as constructing the set. But we can't advise without knowing much more.
Assuming you had full control of the set data structure, your approach of keeping a running product (whenever you add an element, you do a single O(1) multiplication) is a very good idea IF there exists a divisibility test that is faster than O(N)... in fact your solution is really smart, because we can just do a single ALU division and hope we're within float tolerance. Do note, however, that this only allows a speedup factor of roughly 20x at most, I think, since 21! > 2^64. There might be tricks to play with congruence-modulo-an-integer, but I can't think of any. I have a slight hunch, though, that there is no divisibility test faster than O(#primes), though I'd like to be proved wrong!
If you are doing this repeatedly on duplicates, you may benefit from caching, depending on what exactly you are doing; give each set a unique ID (though since this makes updates hard, you may ironically wish to do something exactly like your scheme to make fingerprints, but mod max_int_size with collision detection). To manage memory, you can pin extremely expensive set comparisons (e.g. checking if a giant set is part of itself) in the cache, while otherwise using a most-recently-used eviction policy if you run into memory issues. The nice thing about this is that it synergizes with an element-by-element rejection test: you will throw out sets quickly if they don't have many overlapping elements, but if they do have many overlapping elements the calculations will take a long time, and if you repeat those calculations, caching can come in handy.
Let A and B be two sets, and you want to check if A is a subset of B. The first idea that pops into my mind is to sort both sets and then simply check if every element of A is contained in B, as follows:
Let n_A and n_B be the cardinality of A and B, respectively. Let i_A = 1, i_B = 1. Then the following algorithm (that is O(n_A + n_B)) will solve the problem:
// A and B assumed to be sorted
i_A = 1;
i_B = 1;
n_A = size(A);
n_B = size(B);
while (i_A <= n_A) {
    while (A[i_A] > B[i_B]) {
        i_B++;
        if (i_B > n_B) return false;
    }
    if (A[i_A] != B[i_B]) return false;
    i_A++;
}
return true;
The same thing, but in a more functional, recursive way (some will find the previous easier to understand, others might find this one easier to understand):
// A and B assumed to be sorted
function subset(A, B)
    n_A = size(A)
    n_B = size(B)
    function subset0(i_A, i_B)
        if (i_A > n_A) true
        else if (i_B > n_B) false
        else
            if (A[i_A] <= B[i_B]) return (A[i_A] == B[i_B]) && subset0(i_A + 1, i_B + 1);
            else return subset0(i_A, i_B + 1);
    subset0(1, 1)
In this last example, notice that subset0 is tail recursive: if (A[i_A] == B[i_B]) is false then there will be no recursive call, and if it is true, then there's no need to keep this information, since the result of true && subset0(...) is exactly the same as subset0(...). So any smart compiler will be able to transform this into a loop, avoiding stack overflows or performance hits caused by function calls.
This will certainly work, but we might be able to optimize it a lot in the average case if you can provide more information about your sets, such as the probability distribution of the values in the sets, or whether you expect the answer to be biased (i.e., more often true, or more often false), etc.
Also, have you already written any code to actually measure its performance? Or are you trying to pre-optimize?
You should start by writing the simplest and most straightforward solution that works, and measure its performance. If it's not already satisfactory, only then you should start trying to optimize it.
I'll present an O(m+n) time-per-test algorithm. But first, two notes regarding the problem statement:
Note 1 - Your edits say that set sizes may be a few thousand, and numbers may range up to a million or two.
In the following, let m, n denote the sizes of sets A, B, and let R denote the largest number allowed in the sets.
Note 2 - The multiplication method you proposed is quite inefficient. Although it uses O(m+n) multiplies, it is not an O(m+n) method because the product lengths are worse than O(m) and O(n), so it would take more than O(m^2 + n^2) time, which is worse than the O(m ln(m) + n ln(n)) time required for sorting-based methods, which in turn is worse than the O(m+n) time of the following method.
For the presentation below, I suppose that sets A, B can completely change between tests, which you say can occur several hundred times per second. If there are partial changes, and you know which p elements change in A from one test to next, and which q change in B, then the method can be revised to run in O(p+q) time per test.
Step 0. (Performed one time only, at outset.) Clear an array F, containing R bits or bytes, as you prefer.
Step 1. (Initial step of per-test code.) For i from 0 to n-1, set F[B[i]], where B[i] denotes the i'th element of set B. This is O(n).
Step 2. For i from 0 to m-1, { test F[A[i]]. If it is clear, report that A is not a subset of B, and go to step 4; else continue }. This is O(m).
Step 3. Report that A is a subset of B.
Step 4. (Clear used bits) For i from 0 to n-1, clear F[B[i]]. This is O(n).
The initial step (clearing array F) is O(R) but steps 1-4 amount to O(m+n) time.
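A minimal Python sketch of steps 0-4, assuming the elements are non-negative integers below a known bound R (the value of R here is illustrative):

R = 2_000_000                       # largest value allowed in the sets
F = bytearray(R + 1)                # step 0: cleared once at the outset

def is_subset(A, B):
    for b in B:                     # step 1: mark the members of B, O(n)
        F[b] = 1
    result = all(F[a] for a in A)   # steps 2-3: every member of A must be marked, O(m)
    for b in B:                     # step 4: clear the used flags, O(n)
        F[b] = 0
    return result

print(is_subset([1, 2, 3], [1, 2, 3, 4]))     # True
print(is_subset([1, 2, 3], [1, 2, 4, 5, 6]))  # False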
Given the limit on the size of the integers, if the set of B sets is small and changes seldom, consider representing the B sets as bitsets (bit arrays indexed by integer set member). This doesn't require sorting, and the test for each element is very fast.
If the A members are sorted and tend to be clustered together, then you get another speedup by testing all the elements in one word of the bitset at a time.
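A hedged sketch of that bitset representation in Python, using a big integer as the bit array (precompute one per B set; Python's arbitrary-precision integers handle the word-at-a-time part internally):

def to_bitset(values):
    """Pack a set of non-negative integers into an integer bitmask."""
    bits = 0
    for v in values:
        bits |= 1 << v
    return bits

B_bits = to_bitset([1, 2, 3, 4])        # computed once per (rarely changing) B set
A_bits = to_bitset([1, 2, 3])
print(A_bits & B_bits == A_bits)        # True: A is a subset of B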
