You are given an integer N and an int array K[].
The task at hand is to generate a uniformly random number between 0 and N-1 that does not appear in K.
N is an integer >= 0.
K.length is < N-1, and 0 <= K[i] <= N-1. Also assume K is sorted and each element of K is unique.
You are given a function uniformRand(int M) which generates a uniform random number in the range 0 to M-1. Assume this function's complexity is O(1).
Example:
N = 7
K = {0, 1, 5}
the function should return one of { 2, 3, 4, 6 }, each with equal probability.
I could get an O(N) solution for this: first generate a random number between 0 and N - K.length - 1, then map the generated number to a number not in K. The second step takes the complexity to O(N). Can it be done better, maybe in O(log N)?
You can use the fact that all the numbers in K[] are between 0 and N-1 and they are distinct.
For your example case, you generate a random number from 0 to 3. Say you get a random number r. Now you conduct binary search on the array K[].
Initialize i = K.length/2.
Compute K[i] - i. This gives you the number of values missing from the array in the range 0 to K[i].
For example, K[2] = 5, so K[2] - 2 = 3 values are missing in the range 0 to 5, namely 2, 3, 4.
Hence you can decide whether you have to conduct the remaining search in the first part of array K or the next part. This is because you know r.
This search has complexity O(log K.length).
EDIT: For example,
N = 7
K = {0, 1, 4} // modified the array to clarify the algorithm steps.
the function should return any random number { 2, 3, 5, 6 } with equal probability.
Generate a random number r between 0 and N - K.length - 1, i.e., random{0-3}. Say we get 3; counting ranks from zero, that means we require the 4th missing number in array K.
Conduct binary search on array K[].
Initial i = K.length/2 = 1.
Now we see K[1] - 1 = 0. Hence no number is missing up to i = 1, so we search the latter part of the array.
Now i = 2. K[2] - 2 = 4 - 2 = 2. Hence there are 2 missing numbers up to index i = 2. But we need the 4th missing element. So we again have to search in the latter part of the array.
Now we reach an empty array. What should we do now? If we reach an empty array between say K[j] & K[j+1] then it simply means that all elements between K[j] and K[j+1] are missing from the array K.
Hence all elements above K[2] are missing from the array, namely 5 and 6. We need the 4th element out of which we have already discarded 2 elements. Hence we will choose the second element which is 6.
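A minimal Python sketch of this procedure (the helper names are mine, and random.randrange stands in for the given uniformRand):

import random

def kth_missing(K, r):
    # Binary search for the smallest index i with K[i] - i >= r.
    # K[i] - i counts the missing numbers below K[i]; it is nondecreasing
    # because K is sorted and its elements are distinct.
    lo, hi = 0, len(K)
    while lo < hi:
        mid = (lo + hi) // 2
        if K[mid] - mid >= r:
            hi = mid
        else:
            lo = mid + 1
    # The r-th (1-based) missing number has exactly lo elements of K below it.
    return r - 1 + lo

def rand_not_in(K, N):
    r = random.randrange(N - len(K)) + 1   # rank of the missing number to pick
    return kth_missing(K, r)

For N = 7, K = [0, 1, 5], the ranks 1 through 4 map to 2, 3, 4 and 6, each returned with probability 1/4.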
Binary search.
The basic algorithm:
(not quite the same as the other answer - the number is only generated at the end)
Start in the middle of K.
By looking at the current value and its index, we can determine the number of pickable numbers (numbers not in K) to the left.
Similarly, by including N, we can determine the number of pickable numbers to the right.
Now randomly go either left or right, weighted based on the count of pickable numbers on each side.
Repeat in the chosen subarray until the subarray is empty.
Then generate a random number in the range consisting of the numbers before and after the subarray in the array.
The running time would be O(log |K|), and, since |K| < N-1, O(log N).
The exact mathematics for number counts and weights can be derived from the example below.
Extension with K containing a bigger range:
Now let's say (for enrichment purposes) K can also contain values N or larger.
Then, instead of starting with the entire K, we start with a subarray up to position min(N, |K|), and start in the middle of that.
It's easy to see that the N-th position in K (if one exists) will be >= N, so this chosen range includes any possible number we can generate.
From here, we do a binary search for N, which gives us a point where all values to the left are < N even if N itself is not present (the algorithm above doesn't deal with K containing values greater than N).
Then we just run the algorithm as above with the subarray ending at the last value < N.
The running time would be O(log N), or, more specifically, O(log min(N, |K|)).
Example:
N = 10
K = {0, 1, 4, 5, 8}
So we start in the middle - 4.
Given that we're at index 2, we know there are 2 elements to the left, and the value is 4, so there are 4 - 2 = 2 pickable values to the left.
Similarly, there are 10 - (4+1) - 2 = 3 pickable values to the right.
So now we go left with probability 2/(2+3) and right with probability 3/(2+3).
Let's say we went right, and our next middle value is 5.
We are at the first position in this subarray, and the previous value is 4, so we have 5 - (4+1) = 0 pickable values to the left.
And there are 10 - (5+1) - 1 = 3 pickable values to the right.
We can't go left (0 probability). If we go right, our next middle value would be 8.
There would be 2 pickable values to the left, and 1 to the right.
If we go left, we'd have an empty subarray.
So then we'd generate a number between 5 and 8, which would be 6 or 7 with equal probability.
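Here's how that weighted walk might look in Python (a sketch; the left_edge/right_edge sentinel bookkeeping is my own):

import random

def pick_uniform(K, N):
    lo, hi = 0, len(K)             # current subarray is K[lo:hi]
    left_edge, right_edge = -1, N  # exclusive bounds on the candidate values
    while lo < hi:
        mid = (lo + hi) // 2
        v = K[mid]
        # Pickable numbers strictly between left_edge and v, minus the
        # K entries sitting in that range (and symmetrically on the right).
        left_count = (v - left_edge - 1) - (mid - lo)
        right_count = (right_edge - v - 1) - (hi - mid - 1)
        # Go left or right, weighted by the pickable counts on each side.
        if random.randrange(left_count + right_count) < left_count:
            hi, right_edge = mid, v
        else:
            lo, left_edge = mid + 1, v
    # Empty subarray: everything strictly between the edges is pickable.
    return random.randrange(left_edge + 1, right_edge)

For N = 10, K = [0, 1, 4, 5, 8], the first step sees 2 pickable values to the left of K[2] = 4 and 3 to the right, exactly as in the example, and one possible run ends by drawing uniformly from {6, 7}, matching the example's final step.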
This can be solved by basically solving the following problem:
Find the rth smallest number not in the given array, K, subject to
conditions in the question.
For that consider the implicit array D, defined by
D[i] = K[i] - i for 0 <= i < L, where L is length of K
We also set D[-1] = 0 and D[L] = N.
We also define K[-1] = -1 (so that the formula below also covers the case where the answer lies below K[0], and D[-1] = K[-1] - (-1) = 0 stays consistent).
Note, we don't actually need to construct D. Also note that D is sorted (and all elements non-negative), as the numbers in K[] are unique and increasing.
Now we make the following claim:
CLAIM: To find the rth smallest number not in K[], we need to find the rightmost occurrence of r' in D (say at position j), where r' is the largest number in D which is < r. Such an r' exists, because D[-1] = 0. Once we find such an r' (and j), the number we are looking for is r - r' + K[j].
Proof: The definition of r' and j tells us that there are exactly r' numbers missing from 0 to K[j], and at least r numbers missing from 0 to K[j+1]. Thus all the numbers from K[j]+1 to K[j+1]-1 are missing (and these are at least r - r' in number), and the number we seek is among them, given by K[j] + r - r'.
Algorithm:
In order to find (r',j) all we need to do is a (modified) binary search for r in D, where we keep moving to the left even if we find r in the array.
This is an O(log |K|) algorithm.
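For completeness, a direct Python transcription of the claim (a sketch; D stays implicit as K[i] - i, and j = -1 plays the role of the sentinels):

def rth_absent(K, r):
    # Find the rightmost index j with D[j] = K[j] - j < r (j = -1 if none).
    lo, hi = -1, len(K) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2   # biased upward so the loop always shrinks
        if K[mid] - mid < r:
            lo = mid               # D[mid] is still below r: answer is right of mid
        else:
            hi = mid - 1
    j = lo
    r_prime = K[j] - j if j >= 0 else 0   # D[j], or D[-1] = 0 at the sentinel
    k_j = K[j] if j >= 0 else -1          # K[j], or K[-1] = -1 at the sentinel
    return k_j + (r - r_prime)

With K = [0, 1, 5], the calls rth_absent(K, 1) through rth_absent(K, 4) return 2, 3, 4 and 6.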
If you are running this many times, it probably pays to speed up your generation operation: O(log N) time just isn't acceptable.
Make an empty array G. Starting at zero, count upwards while progressing through the values of K: if a value isn't in K, add it to G; if it is in K, skip it and advance your K pointer. (This relies on K being sorted.)
Now you have an array G which has only acceptable numbers.
Use your random number generator to choose a value from G.
This requires O(N) preparatory work and each generation happens in O(1) time. After N look-ups the amortized time of all operations is O(1).
A Python mock-up:
import random

class PRNG:
    def __init__(self, K, N):
        # Precompute G, the list of all values in [0, N) not present in K.
        # K must be sorted; kptr walks through it once.
        self.G = []
        kptr = 0
        for i in range(N):
            if kptr < len(K) and K[kptr] == i:
                kptr += 1            # i is in K: skip it
            else:
                self.G.append(i)     # i is pickable
    def getRand(self):
        # O(1) per query: uniform choice from the precomputed list.
        rn = random.randint(0, len(self.G) - 1)
        return self.G[rn]

prng = PRNG([0, 1, 5], 7)
for i in range(20):
    print(prng.getRand())
The problem is to find the count of numbers between A and B (inclusive) that have sum of digits equal to S.
Also print the smallest such number between A and B (inclusive).
Input:
Single line consisting of A,B,S.
Output:
Two lines.
On the first line, the number of integers between A and B having sum of digits equal to S.
On the second line, the smallest such number between A and B.
Constraints:
1 <= A <= B < 10^15
1 <= S <= 135
Source: Hacker Earth
My solution works for only 30% of their inputs. What could be the best possible solution to this?
The algorithm I am using maintains a running digit sum, incrementing it along with the number and recomputing it from scratch whenever the ones digit wraps around to 0.
Below is the solution in Python:
def digit_sum(n):
    if n < 10:
        return n
    return n % 10 + digit_sum(n / 10)

a, b, s = map(long, raw_input().split(" "))
count = 0
smallest = -1
su = digit_sum(a)
while a <= b:
    if a % 10 == 0:
        su = digit_sum(a)   # a carry happened: recompute from scratch
    if su == s:
        count += 1
        if smallest == -1:  # the first match is the smallest, since a increases
            smallest = a
    a += 1
    su += 1                 # no carry: the digit sum grows by exactly one
print count
print smallest
There are two separate problems here: finding the smallest number between those numbers that has the right digit sum and finding the number of values in the range with that digit sum. I'll talk about those problems separately.
Counting values between A and B with digit sum S.
The general approach for solving this problem will be the following:
Compute the number of values less than or equal to A - 1 with digit sum S.
Compute the number of values less than or equal to B with digit sum S.
Subtract the first number from the second.
To do this, we should be able to use a dynamic programming approach. We're going to try to answer queries of the following form:
How many D-digit numbers are there whose first digit is k and whose digits sum up to S?
We'll create a table N[D, k, S] to hold these values. We know that D is going to be at most 16 and that S is going to be at most 136, so this table will have only 10 × 16 × 136 = 21,760 entries, which isn't too bad. To fill it in, we can use the following base cases:
N[1, S, S] = 1 for 0 ≤ S ≤ 9, since there's exactly one one-digit number that sums up to any value less than ten.
N[1, k, S] = 0 for 0 ≤ S ≤ 9 if k ≠ S, since a one-digit number k has digit sum k, not S.
N[1, k, S] = 0 for 10 ≤ S ≤ 135, since no one-digit number has a digit sum of ten or more.
N[1, k, S] = 0 for any S < 0.
Then, we can use the following logic to fill in the other table entries:
N[D + 1, k, S] = sum(i from 0 to 9) N[D, i, S - k].
This says that the number of (D+1)-digit numbers whose first digit is k that sum up to S is given by the number of D-digit numbers that sum up to S - k. The number of D-digit numbers that sum up to S - k is given by the number of D-digit numbers that sum up to S - k whose first digit is 0, 1, 2, ..., 9, so we have to sum up over them.
Filling in this DP table takes time only O(1), and in fact you could conceivably precompute it and hardcode it into the program if you were really concerned about time.
So how can we use this table? Well, suppose we want to know how many numbers that sum up to S are less than or equal to some number X. To do this, we can process the digits of X one at a time. Let's write X one digit at a time as d1 ... dn. We can start off by looking at N[n, d1, S]. This gives us the number of n-digit numbers whose first digit is d1 that sum up to S. This may overestimate the number of values less than or equal to X that sum up to S. For example, if our number is 21,111 and we want the number of values that sum up to exactly 12, then looking up this table value will give us false positives for numbers like 29,100 that start with a 2 and are five digits long, but which are still greater than X.

To handle this, we can move to the next digit of the number X. Since the first digit was a 2, the rest of the digits in the number must sum up to 10. Moreover, since the next digit of X (21,111) is a 1, we can now subtract from our total the number of 4-digit numbers starting with 2, 3, 4, 5, ..., 9 that add up to 10. We can then repeat this process one digit at a time.
More generally, our algorithm will be as follows. Let X be our number and S the target sum. Write X = d1d2...dn and compute the following:
# Begin by counting every number whose first digit is at most d[1].
# (This overcounts: it also includes numbers that start with d[1] but exceed X.)
result = 0
for i from 0 to d[1]:
    result += N[n, i, S]
# Now walk the remaining digits, removing numbers that agree with X on the
# first i - 1 digits but whose i-th digit is too large (those exceed X).
S -= d[1]
for i = 2 to n:
    for j = d[i] + 1 to 9:
        result -= N[n - i + 1, j, S]
    S -= d[i]
The value of result will then be the number of values less than or equal to X that sum up to exactly S. This algorithm will only run for at most 16 iterations, so it should be very quick. Moreover, using this algorithm and the earlier subtraction trick, we can use it to compute how many values between A and B sum up to exactly S.
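If it helps to see the whole computation end to end, here is a compact Python sketch of the same idea (my own naming; it counts digit strings with leading zeros allowed, which is an equivalent way of organizing the table):

def count_upto(X, S):
    # How many integers in [0, X] have digit sum exactly S?
    digits = [int(c) for c in str(X)]
    n = len(digits)
    # ways[d][s]: d-digit strings (leading zeros allowed) with digit sum s.
    ways = [[0] * (S + 1) for _ in range(n + 1)]
    ways[0][0] = 1
    for d in range(1, n + 1):
        for s in range(S + 1):
            ways[d][s] = sum(ways[d - 1][s - k] for k in range(min(9, s) + 1))
    result, remaining = 0, S
    for i, d in enumerate(digits):
        # Keep the first i digits of X and place a smaller digit at position i;
        # everything built this way is less than X.
        for j in range(min(d, remaining + 1)):
            result += ways[n - i - 1][remaining - j]
        remaining -= d
        if remaining < 0:
            return result
    return result + (remaining == 0)   # X itself counts iff its digits sum to S

def count_between(A, B, S):
    return count_upto(B, S) - count_upto(A - 1, S)

For instance, count_between(1, 100, 7) gives 8, for 7, 16, 25, 34, 43, 52, 61 and 70.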
Finding the smallest value in [A, B] with digit sum S.
We can use a similar trick with our DP table to find the smallest number greater than or equal to A that sums up to exactly S. I'll leave the details as an exercise, but as a hint, work one digit at a time, trying to find the smallest number for which the DP table returns a nonzero value.
Hope this helps!
I found this interview question, and I couldn't come up with an algorithm better than O(N^2 * P):
Given a vector of P natural numbers (1,2,3,...,P) and another vector of length N whose elements are from the first vector, find the longest subsequence in the second vector, such that all elements are uniformly distributed (have the same frequency).
Example : (1,2,3) and (1,2,1,3,2,1,3,1,2,3,1). The longest subsequence is in the interval [2,10], because it contains all the elements from the first sequence with the same frequency (1 appears three times, 2 three times, and 3 three times).
The time complexity should be O(N * P).
"Subsequence" usually means noncontiguous. I'm going to assume that you meant "sublist".
Here's an O(N P) algorithm assuming we can hash (assumption not needed; we can radix sort instead). Scan the array keeping a running total for each number. For your example,
read | counts (1 2 3)
-----+---------------
  -  | 0 0 0
  1  | 1 0 0
  2  | 1 1 0
  1  | 2 1 0
  3  | 2 1 1
  2  | 2 2 1
  1  | 3 2 1
  3  | 3 2 2
  1  | 4 2 2
  2  | 4 3 2
  3  | 4 3 3
  1  | 5 3 3
Now, normalize each row by subtracting the minimum element. The result is
0: 000
1: 100
2: 110
3: 210
4: 100
5: 110
6: 210
7: 100
8: 200
9: 210
10: 100
11: 200.
Prepare two hashes, mapping each row to the first index at which it appears and the last index at which it appears. Iterate through the keys and take the one with maximum last - first.
000: first is at 0, last is at 0
100: first is at 1, last is at 10
110: first is at 2, last is at 5
210: first is at 3, last is at 9
200: first is at 8, last is at 11
The best key is 100, since its sublist has length 9. The sublist is the (1+1)th element to the 10th.
This works because a sublist is balanced if and only if its first and last unnormalized histograms are the same up to adding a constant, which occurs if and only if the first and last normalized histograms are identical.
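A direct Python rendering (a sketch; recomputing the normalized tuple costs O(P) per element, for O(N P) overall):

def longest_balanced(values, seq):
    counts = {v: 0 for v in values}
    def normalized():
        m = min(counts.values())
        return tuple(counts[v] - m for v in values)
    first = {normalized(): 0}    # normalized histogram -> earliest prefix index
    best = (0, 0)
    for i, x in enumerate(seq, 1):
        counts[x] += 1
        key = normalized()
        if key in first:         # same histogram seen before: balanced window
            if i - first[key] > best[1] - best[0]:
                best = (first[key], i)
        else:
            first[key] = i
    return best                  # seq[best[0]:best[1]] is the longest balanced sublist

On the example, longest_balanced((1, 2, 3), (1, 2, 1, 3, 2, 1, 3, 1, 2, 3, 1)) returns (1, 10), the sublist from the second element through the tenth.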
If memory usage is not important, it's easy...
Build an N×P matrix, storing in column i, for each value p, how many times p occurs among the first i elements of the second vector.
After completing the matrix, search for a column i whose entries are all equal; the maximum such i is the answer.
With randomization, you can get it down to linear time. The idea is to replace each of the P values with a random integer, such that those integers sum to zero. Now look for two prefix sums that are equal. This allows some small chance of false positives, which we could remedy by checking our output.
In Python 2.7:
# input:
vec1 = [1, 2, 3]
P = len(vec1)
vec2 = [1, 2, 1, 3, 2, 1, 3, 1, 2, 3, 1]
N = len(vec2)
# Choose big enough integer B. For each k in vec1, choose
# a random mod-B remainder r[k], so their mod-B sum is 0.
# Any P-1 of these remainders are independent.
import random
B = N*N*N
r = dict((k, random.randint(0,B-1)) for k in vec1)
s = sum(r.values())%B
r[vec1[0]] = (r[vec1[0]]+B-s)%B
assert sum(r.values())%B == 0
# For 0<=i<=N, let vec3[i] be mod-B sum of r[vec2[j]], for j<i.
vec3 = [0] * (N+1)
for i in range(1,N+1):
vec3[i] = (vec3[i-1] + r[vec2[i-1]]) % B
# Find pair (i,j) so vec3[i]==vec3[j], and j-i is as large as possible.
# This is either a solution (subsequence vec2[i:j] is uniform) or a false
# positive. The expected number of false positives is < N*N/(2*B) < 1/N.
(i, j)=(0, 0)
first = {}
for k in range(N+1):
v = vec3[k]
if v in first:
if k-first[v] > j-i:
(i, j) = (first[v], k)
else:
first[v] = k
# output:
print "Found subsequence from", i, "(inclusive) to", j, "(exclusive):"
print vec2[i:j]
print "This is either uniform, or rarely, it is a false positive."
Here is an observation: you can't get a uniformly distributed sequence whose length is not a multiple of P. This implies that you only have to check the sublists of N that are P, 2P, 3P... long - (N/P)^2 such sequences.
You can get this down to O(N) time, with no dependence on P by enhancing uty's solution.
For each row, instead of storing the normalized counts of each element, store a hash of the normalized counts while only keeping the normalized counts for the current index. During each iteration, you need to first update the normalized counts, which has an amortized cost of O(1) if each decrement of a count is paid for when it is incremented. Next you recompute the hash. The key here is that the hash needs to be easily updatable following an increment or decrement of one of the elements of the tuple that is being hashed.
At least one way of doing this hashing efficiently, with good theoretical independence guarantees, is shown in the answer to this question. Note that the O(lg P) cost for computing the exponential to determine the amount to add to the hash can be eliminated by precomputing the exponentials modulo the prime, with a total running time of O(P) for the precomputation, giving a total running time of O(N + P) = O(N).
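One possible shape for that enhancement in Python (a sketch under my own assumptions: a randomized polynomial hash modulo a Mersenne prime rather than the exact scheme from the linked answer, so rare false positives are possible):

import random

def longest_balanced_linear(values, seq):
    p = (1 << 61) - 1                        # hash modulus
    x = random.randrange(2, p)
    power = {v: pow(x, i + 1, p) for i, v in enumerate(values)}
    S = sum(power.values()) % p              # delta for "subtract 1 from every count"
    counts = {v: 0 for v in values}
    zeros = len(values)                      # how many normalized counts are 0
    h = 0                                    # hash of the normalized count vector
    first = {0: 0}
    best = (0, 0)
    for i, v in enumerate(seq, 1):
        if counts[v] == 0:
            zeros -= 1
        counts[v] += 1
        h = (h + power[v]) % p
        if zeros == 0:                       # minimum rose to 1: renormalize.
            for u in values:                 # amortized O(1): each decrement here
                counts[u] -= 1               # was paid for by an earlier increment
                if counts[u] == 0:
                    zeros += 1
            h = (h - S) % p
        if h in first:
            if i - first[h] > best[1] - best[0]:
                best = (first[h], i)
        else:
            first[h] = i
    return best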
My knowledge of big-O is limited, and when log terms show up in the equation it throws me off even more.
Can someone maybe explain to me in simple terms what a O(log n) algorithm is? Where does the logarithm come from?
This specifically came up when I was trying to solve this midterm practice question:
Let X(1..n) and Y(1..n) contain two lists of integers, each sorted in nondecreasing order. Give an O(log n)-time algorithm to find the median (or the nth smallest integer) of all 2n combined elements. For example, if X = (4, 5, 7, 8, 9) and Y = (3, 5, 8, 9, 10), then 7 is the median of the combined list (3, 4, 5, 5, 7, 8, 8, 9, 9, 10). [Hint: use concepts of binary search]
I have to agree that it's pretty weird the first time you see an O(log n) algorithm... where on earth does that logarithm come from? However, it turns out that there are several different ways that you can get a log term to show up in big-O notation. Here are a few:
Repeatedly dividing by a constant
Take any number n; say, 16. How many times can you divide n by two before you get a number less than or equal to one? For 16, we have that
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
Notice that this ends up taking four steps to complete. Interestingly, we also have that log2 16 = 4. Hmmm... what about 128?
128 / 2 = 64
64 / 2 = 32
32 / 2 = 16
16 / 2 = 8
8 / 2 = 4
4 / 2 = 2
2 / 2 = 1
This took seven steps, and log2 128 = 7. Is this a coincidence? Nope! There's a good reason for this. Suppose that we divide a number n by 2 a total of i times. Then we get the number n / 2^i. If we want to solve for the value of i where this value is at most 1, we get
n / 2^i ≤ 1
n ≤ 2^i
log2 n ≤ i
In other words, if we pick an integer i such that i ≥ log2 n, then after dividing n in half i times we'll have a value that is at most 1. The smallest i for which this is guaranteed is roughly log2 n, so if we have an algorithm that divides by 2 until the number gets sufficiently small, then we can say that it terminates in O(log n) steps.
An important detail is that it doesn't matter what constant you're dividing n by (as long as it's greater than one); if you divide by the constant k, it will take logk n steps to reach 1. Thus any algorithm that repeatedly divides the input size by some fraction will need O(log n) iterations to terminate. Those iterations might take a lot of time and so the net runtime needn't be O(log n), but the number of steps will be logarithmic.
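As a tiny illustration of the counting argument, here is the halving loop in Python (assuming an integer n >= 1):

def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2        # divide by the constant 2
        steps += 1
    return steps       # roughly log2 n

halving_steps(16) returns 4 and halving_steps(128) returns 7, matching the hand computations above.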
So where does this come up? One classic example is binary search, a fast algorithm for searching a sorted array for a value. The algorithm works like this:
If the array is empty, return that the element isn't present in the array.
Otherwise:
Look at the middle element of the array.
If it's equal to the element we're looking for, return success.
If it's greater than the element we're looking for:
Throw away the second half of the array.
Repeat
If it's less than the element we're looking for:
Throw away the first half of the array.
Repeat
For example, to search for 5 in the array
1 3 5 7 9 11 13
We'd first look at the middle element:
1 3 5 7 9 11 13
^
Since 7 > 5, and since the array is sorted, we know for a fact that the number 5 can't be in the back half of the array, so we can just discard it. This leaves
1 3 5
So now we look at the middle element here:
1 3 5
^
Since 3 < 5, we know that 5 can't appear in the first half of the array, so we can throw the first half array to leave
5
Again we look at the middle of this array:
5
^
Since this is exactly the number we're looking for, we can report that 5 is indeed in the array.
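In Python, the procedure just traced might look like this (a minimal sketch):

def binary_search(arr, target):
    lo, hi = 0, len(arr)       # the remaining array is arr[lo:hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return True        # found it
        elif arr[mid] > target:
            hi = mid           # throw away the second half
        else:
            lo = mid + 1       # throw away the first half
    return False               # ran out of elements: not present

binary_search([1, 3, 5, 7, 9, 11, 13], 5) probes 7, then 3, then 5, exactly as in the walkthrough, and returns True.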
So how efficient is this? Well, on each iteration we're throwing away at least half of the remaining array elements. The algorithm stops as soon as the array is empty or we find the value we want. In the worst case, the element isn't there, so we keep halving the size of the array until we run out of elements. How long does this take? Well, since we keep cutting the array in half over and over again, we will be done in at most O(log n) iterations, since we can't cut the array in half more than O(log n) times before we run out of array elements.
Algorithms following the general technique of divide-and-conquer (cutting the problem into pieces, solving those pieces, then putting the problem back together) tend to have logarithmic terms in them for this same reason - you can't keep cutting some object in half more than O(log n) times. You might want to look at merge sort as a great example of this.
Processing values one digit at a time
How many digits are in the base-10 number n? Well, a number with k digits lies between 10^(k-1) and the largest k-digit number, which is 999...9 (k nines), equal to 10^k - 1. So if we know that n has k digits in it, we know that
10^(k-1) ≤ n ≤ 10^k - 1
Taking base-10 logarithms,
k - 1 ≤ log10 n
log10 (n + 1) ≤ k
From which we get that k is approximately the base-10 logarithm of n. In other words, the number of digits in n is O(log n).
For example, let's think about the complexity of adding two large numbers that are too big to fit into a machine word. Suppose that we have those numbers represented in base 10, and we'll call the numbers m and n. One way to add them is through the grade-school method - write the numbers out one digit at a time, then work from the right to the left. For example, to add 1337 and 2065, we'd start by writing the numbers out as
1 3 3 7
+ 2 0 6 5
==============
We add the last digit and carry the 1:
1
1 3 3 7
+ 2 0 6 5
==============
2
Then we add the second-to-last ("penultimate") digit and carry the 1:
1 1
1 3 3 7
+ 2 0 6 5
==============
0 2
Next, we add the third-to-last ("antepenultimate") digit:
1 1
1 3 3 7
+ 2 0 6 5
==============
4 0 2
Finally, we add the fourth-to-last ("preantepenultimate"... I love English) digit:
1 1
1 3 3 7
+ 2 0 6 5
==============
3 4 0 2
Now, how much work did we do? We do a total of O(1) work per digit (that is, a constant amount of work), and there are O(max{log n, log m}) total digits that need to be processed. This gives a total of O(max{log n, log m}) complexity, because we need to visit each digit in the two numbers.
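A short sketch of that grade-school loop (my own representation: lists of decimal digits, least significant digit first):

def add_digits(a, b):
    # O(1) work per digit; the digit count is O(max(log m, log n)).
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return result

add_digits([7, 3, 3, 1], [5, 6, 0, 2]) yields [2, 0, 4, 3], i.e. 1337 + 2065 = 3402.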
Many algorithms get an O(log n) term in them from working one digit at a time in some base. A classic example is radix sort, which sorts integers one digit at a time. There are many flavors of radix sort, but they usually run in time O(n log U), where U is the largest possible integer that's being sorted. The reason for this is that each pass of the sort takes O(n) time, and there are a total of O(log U) iterations required to process each of the O(log U) digits of the largest number being sorted. Many advanced algorithms, such as Gabow's shortest-paths algorithm or the scaling version of the Ford-Fulkerson max-flow algorithm, have a log term in their complexity because they work one digit at a time.
As to your second question about how you solve that problem, you may want to look at this related question which explores a more advanced application. Given the general structure of problems that are described here, you now can have a better sense of how to think about problems when you know there's a log term in the result, so I would advise against looking at the answer until you've given it some thought.
When we talk about big-Oh descriptions, we are usually talking about the time it takes to solve problems of a given size. And usually, for simple problems, that size is just characterized by the number of input elements, and that's usually called n, or N. (Obviously that's not always true-- problems with graphs are often characterized in numbers of vertices, V, and number of edges, E; but for now, we'll talk about lists of objects, with N objects in the lists.)
We say that a problem "is big-Oh of (some function of N)" if and only if:
There exist some constant c and some threshold N_0 such that, for all N > N_0, the runtime of the algorithm is less than c times (some function of N).
In other words, don't think about small problems where the "constant overhead" of setting up the problem matters, think about big problems. And when thinking about big problems, big-Oh of (some function of N) means that the run-time is still always less than some constant times that function. Always.
In short, that function is an upper bound, up to a constant factor.
So, "big-Oh of log(n)" means the same thing that I said above, except "some function of N" is replaced with "log(n)."
So, your problem tells you to think about binary search, so let's think about that. Let's assume you have, say, a list of N elements that are sorted in increasing order. You want to find out if some given number exists in that list. One way to do that which is not a binary search is to just scan each element of the list and see if it's your target number. You might get lucky and find it on the first try. But in the worst case, you'll check N different times. This is not binary search, and it is not big-Oh of log(N) because there's no way to force it into the criteria we sketched out above.
You can pick that arbitrary constant to be c=10, and if your list has N=32 elements, you're fine: 10*log(32) = 50, which is greater than the runtime of 32. But if N=64, 10*log(64) = 60, which is less than the runtime of 64. You can pick c=100, or 1000, or a gazillion, and you'll still be able to find some N that violates that requirement. In other words, there is no N_0.
If we do a binary search, though, we pick the middle element, and make a comparison. Then we throw out half the numbers, and do it again, and again, and so on. If your N=32, you can only do that about 5 times, which is log(32). If your N=64, you can only do this about 6 times, etc. Now you can pick that arbitrary constant c, in such a way that the requirement is always met for large values of N.
With all that background, what O(log(N)) usually means is that you have some way to do a simple thing, which cuts your problem size in half. Just like the binary search is doing above. Once you cut the problem in half, you can cut it in half again, and again, and again. But, critically, what you can't do is some preprocessing step that would take longer than that O(log(N)) time. So for instance, you can't shuffle your two lists into one big list, unless you can find a way to do that in O(log(N)) time, too.
(NOTE: Nearly always, Log(N) means log-base-two, which is what I assume above.)
In the following solution, every line with a recursive call operates on
half of the given sizes of the sub-arrays of X and Y.
All other lines run in constant time.
The recurrence is T(2n) = T(2n/2) + c = T(n) + c, which solves to T(2n) = O(lg(2n)) = O(lg n).
You start with MEDIAN(X, 1, n, Y, 1, n).
MEDIAN(X, p, r, Y, i, k)
    if X[r] < Y[i]
        return X[r]
    if Y[k] < X[p]
        return Y[k]
    q = floor((p+r)/2)
    j = floor((i+k)/2)
    if r-p+1 is even
        if X[q+1] > Y[j] and Y[j+1] > X[q]
            if X[q] > Y[j]
                return X[q]
            else
                return Y[j]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q+1, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j+1, k)
    else
        if X[q] > Y[j] and Y[j+1] > X[q-1]
            return Y[j]
        if Y[j] > X[q] and X[q+1] > Y[j-1]
            return X[q]
        if X[q+1] < Y[j-1]
            return MEDIAN(X, q, r, Y, i, j)
        else
            return MEDIAN(X, p, q, Y, j, k)
The Log term pops up very often in algorithm complexity analysis. Here are some explanations:
1. How do you represent a number?
Let's take the number X = 245436. The notation “245436” has implicit information in it. Making that information explicit:
X = 2 * 10 ^ 5 + 4 * 10 ^ 4 + 5 * 10 ^ 3 + 4 * 10 ^ 2 + 3 * 10 ^ 1 + 6 * 10 ^ 0
Which is the decimal expansion of the number. So, the minimum amount of information we need to represent this number is 6 digits. This is no coincidence, as any number less than 10^d can be represented in d digits.
So how many digits are required to represent X? That's equal to the largest exponent of 10 in X plus 1.
==> 10 ^ d > X
==> log(10 ^ d) > log(X)
==> d * log(10) > log(X)
==> d > log(X) // And log appears again...
==> d = floor(log(X)) + 1
Also note that this is the most concise way to denote the number in this range. Any reduction will lead to information loss, as a missing digit can be mapped to 10 other numbers. For example: 12* can be mapped to 120, 121, 122, …, 129.
2. How do you search for a number in (0, N - 1)?
Taking N = 10^d, we use our most important observation:
The minimum amount of information needed to uniquely identify a value in the range 0 to N - 1 is log(N) digits.
This implies that, when asked to search for a number on the integer line, ranging from 0 to N - 1, we need at least log(N) tries to find it. Why? Any search algorithm will need to choose one digit after another in its search for the number.
The minimum number of digits it needs to choose is log(N). Hence the minimum number of operations taken to search for a number in a space of size N is log(N).
Can you guess the order complexities of binary search, ternary search or deca search? It's O(log(N))!
3. How do you sort a set of numbers?
When asked to sort a set of numbers A into an array B, what we are really doing is permuting the elements.
Every element in the original array has to be mapped to its corresponding index in the sorted array. So, for the first element, we have n positions. To correctly find the corresponding index in this range from 0 to n - 1, we need... log(n) operations.
The next element needs log(n-1) operations, the next log(n-2) and so on. The total comes to be:
==> log(n) + log(n - 1) + log(n - 2) + … + log(1)
Using log(a) + log(b) = log(a * b),
==> log(n!)
This can be approximated to n*log(n) - n (Stirling's approximation), which is O(n*log(n))!
Hence we conclude that no comparison-based sorting algorithm can do better than O(n*log(n)). And some algorithms having this complexity are the popular Merge Sort and Heap Sort!
These are some of the reasons why we see log(n) pop up so often in the complexity analysis of algorithms. The same can be extended to binary numbers. I made a video on that here.
Why does log(n) appear so often during algorithm complexity analysis?
Cheers!
We call the time complexity O(log n) when the solution is based on iterations over n in which the work done in each iteration is a fixed fraction of the work in the previous iteration, as the algorithm works towards the solution.
Can't comment yet... necro it is!
Avi Cohen's answer is incorrect, try:
X = 1 3 4 5 8
Y = 2 5 6 7 9
None of the conditions are true, so MEDIAN(X, p, q, Y, j, k) will cut both the fives. These are nondecreasing sequences, not all values are distinct.
Also try this even-length example with distinct values:
X = 1 3 4 7
Y = 2 5 6 8
Now MEDIAN(X, p, q, Y, j+1, k) will cut the four.
Instead I offer this algorithm, call it with MEDIAN(1,n,1,n):
MEDIAN(startx, endx, starty, endy){
    if (startx == endx)
        return min(X[startx], Y[starty])
    odd = (startx + endx) % 2      // 0 if even, 1 if odd
    m = (startx + endx - odd) / 2
    n = (starty + endy - odd) / 2
    x = X[m]
    y = Y[n]
    if (x == y)
        // then there are n-2 (+1 in the odd case) total elements smaller
        // than or equal to both x and y, so this value is the nth smallest:
        // we have found the median.
        return x
    if (x < y)
        // If we remove some numbers smaller than the median, and remove the
        // same amount of numbers bigger than the median, the median will not
        // change. We know the elements before x are smaller than the median,
        // and the elements after y are bigger than the median, so we discard
        // these and continue the search:
        return MEDIAN(m, endx, starty, n + 1 - odd)
    else // x > y
        return MEDIAN(startx, m + 1 - odd, n, endy)
}
I'm struggling to come up with a solution, but I have no ideas.
RobotA and RobotB start with a permutation of the numbers 1..N. RobotA picks first, and they pick alternately. On each turn, a robot may pick any one remaining number from the permutation. When the remaining numbers form an increasing sequence, the game finishes. The robot who picked last (after which the sequence becomes increasing) wins the game.
Assuming both play optimally, who wins?
Example 1:
The original sequence is 1 7 3.
RobotA wins by picking 7, after which the sequence is increasing 1 3.
Example 2:
The original sequence is 8 5 3 1 2.
RobotB wins by selecting the 2, preventing any increasing sequence.
Is there any known algorithm to solve this? Any tips or ideas about where to look would be really appreciated!
Given a sequence w of distinct numbers, let N(w) be the length of w and let L(w) be the length of the longest increasing subsequence in w. For example, if
w = 3 5 8 1 4
then N(w) = 5 and L(w) = 3.
The game ends when L(w) = N(w), or, equivalently, N(w) - L(w) = 0.
Working the game backwards, if on RobotX's turn N(w) - L(w) = 1, then the optimal play is to remove the unique letter not in a longest increasing subsequence, thereby winning the game.
For example, if w = 1 7 3, then N(w) = 3 and L(w) = 2 with a longest increasing subsequence being 1 3. Removing the 7 results in an increasing sequence, ensuring that the player who removed the 7 wins.
Going back to the previous example, w = 3 5 8 1 4, if either 1 or 4 is removed, then for the resulting permutation u we have N(u) - L(u) = 1, so the player who removed the 1 or 4 will certainly lose to a competent opponent. However, any other play results in a victory since it forces the next player to move to a losing position. Here, the optimal play is to remove any of 3, 5, or 8, after which N(u) - L(u) = 2, but after the next move N(v) - L(v) = 1.
Further analysis along these lines should lead to an optimal strategy for either player.
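To experiment with this invariant, here is a small helper (a sketch; the name deficiency and the patience-sorting LIS are my choices, not part of the answer) that computes N(w) - L(w):

import bisect

def deficiency(w):
    # N(w) - L(w): how far w is from being an increasing sequence.
    tails = []                           # tails[k] = smallest possible tail of an
    for v in w:                          # increasing subsequence of length k+1
        i = bisect.bisect_left(tails, v)
        if i == len(tails):
            tails.append(v)
        else:
            tails[i] = v
    return len(w) - len(tails)

deficiency((1, 7, 3)) is 1, so the player to move can win at once, and deficiency((3, 5, 8, 1, 4)) is 2, matching the analysis above.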
The nearest mathematical game that I do know is the Monotone Sequence Game. In a monotonic sequence game, two players alternately choose elements of a sequence from some fixed ordered set (e.g. 1,2,...,N). The game ends when the resulting sequence contains either an ascending subsequence of length A or a descending one of length D. This game has its origins with a theorem of Erdos and Szekeres, and a nice exposition can be found in MONOTONIC SEQUENCE GAMES, and this slide presentation by Bruce Sagan is also a good reference.
If you want to know more about game theory in general, or these sorts of games in particular, then I strongly recommend Winning Ways for Your Mathematical Plays by Berlekamp, Conway and Guy. Volume 3, I believe, addresses these sorts of games.
Looks like a Minimax problem.
I guess there is a faster solution for this task; I will keep thinking.
But I can give you an idea of a solution with O(N! * N^2) complexity.
First, note that picking a number from an N-permutation is equivalent to the following:
Pick a number from the N-permutation. Say it was the number X.
Reassign the numbers using the rule:
1 = 1
2 = 2
...
X-1 = X-1
X = Nothing, it's gone.
X+1 = X
...
N = N - 1
And you get a permutation of N-1 numbers.
Example:
1 5 6 4 2 3
Pick 2
1 5 6 4 3
Reassign
1 4 5 3 2
Let's use this as the move, instead of just picking. It's easy to see that the games are equivalent: player A wins in this game for some permutation if and only if he wins in the original.
Let's assign a code to each permutation of N numbers, N-1 numbers, ..., 2 numbers.
Define F(x) -> {0, 1} (where x is a permutation code) as the function which is 1 when the current player wins and 0 when the current player loses. It is easy to see that F(1 2 .. K-1 K) = 0.
F(x) = 1 if there is at least one move which transforms x into some y with F(y) = 0.
F(x) = 0 if every move from x leads to a y with F(y) = 1.
So you can use recursion with memoization to compute:
Boolean F(X)
{
    Let K be the length of the permutation with code X.
    if F was already computed for argument X, return the previously calculated result;
    if X == 1 2 .. K, return 0;
    Boolean result = 0;
    for i = 1 to K do
    {
        Let Y be the code of the permutation obtained from X by picking the number at position i.
        if (F(Y) == 0)
        {
            result = 1;
            break;
        }
    }
    Store result as F(X);
    return result;
}
For each argument we will compute this function only once. There are 1! permutations of length 1, 2! permutations of length 2, ..., N! permutations of length N. For a permutation of length K, we need O(K) operations to compute the function. So the complexity will be O(1*1! + 2*2! + ... + N*N!) <= O(N! * N^2).
Here is Python code for Wisdom's Wind's algorithm. It prints out wins for RobotA.
import itertools

def moves(p):
    # No moves from a sorted permutation: the game is already over.
    if tuple(sorted(p)) == p:
        return
    # Removing i renumbers the remaining values down to 0..len(p)-2, so
    # every reachable position is again a permutation of a smaller range.
    for i in p:
        yield tuple(j - (j > i) for j in p if j != i)

winning = set()
for n in range(6):
    for p in itertools.permutations(range(n)):
        # p is a win iff some move reaches a position that is not a win.
        if not winning.issuperset(moves(p)):
            winning.add(p)

for p in sorted(winning, key=lambda q: (len(q), q)):
    print(p)