Suppose you are given two sets A and B, each containing n positive integers. You can choose to reorder each set however you like. After reordering, let ai be the i-th element of set A, and let bi be the i-th element of set B. You then receive a payoff of (a1 ^ b1) × (a2 ^ b2) × ... × (an ^ bn). Give a polynomial-time algorithm that maximizes the payoff.
Answer:
I think if we sort both sets in increasing order, the problem is solved. It needs O(n log(n)) time.
Does anyone have a counterexample?
It's straightforward to prove that the greedy algorithm - i.e. pair the largest number with the largest exponent, then the next-largest number with the next-largest exponent, and so on - is optimal.
Suppose the product includes two terms a[i] ** b[i] and a[j] ** b[j], where a[i] < a[j] and b[i] > b[j]. It follows that (a[i] ** b[j]) * (a[j] ** b[i]) is greater than (a[i] ** b[i]) * (a[j] ** b[j]), because they differ by a factor of (a[j] / a[i]) ** (b[i] - b[j]), which by assumption is a number greater than 1 raised to a power greater than 0, so this factor is greater than 1. Therefore, we can improve the payoff by swapping b[i] with b[j], and hence the original pairing was not optimal.
It follows that your greedy algorithm is correct in the sense that it does indeed maximise the payoff.
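As a minimal sketch (the function name is mine), the whole algorithm in Python is just sort-and-zip:

def max_payoff(A, B):
    payoff = 1
    for a, b in zip(sorted(A), sorted(B)):
        payoff *= a ** b
    return payoff

print(max_payoff([2, 3, 5], [1, 4, 2]))  # 2**1 * 3**2 * 5**4 = 11250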
As a hint, we can reduce the problem to a simpler one. Taking the logarithm of the formula, we get:
b1 log(a1) + ... + bn log(an)
Now suppose we have two vectors V1 = <b1, b2, ..., bn> and V2 = <log(a1), log(a2), ..., log(an)>. Hence, the formula above is the dot product of these two vectors (V1.V2). By the definition of the dot product we know that V1.V2 = |V1|*|V2|*cos(theta). By permuting the elements of V1 and V2, the norms of V1 and V2 do not change. Hence, we need to maximize cos(theta), where theta is the angle between V1 and V2; that is, we need to minimize theta.
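If you want to sanity-check this reduction, a small brute force over permutations (on made-up data) confirms that the sorted pairing maximizes b1*log(a1) + ... + bn*log(an):

from itertools import permutations
from math import log

A, B = [2, 3, 5], [1, 4, 2]
best = max(sum(b * log(a) for a, b in zip(perm, sorted(B)))
           for perm in permutations(A))
sorted_val = sum(b * log(a) for a, b in zip(sorted(A), sorted(B)))
assert abs(best - sorted_val) < 1e-9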
I have two arrays of integers A and B, both of size n. The cost of a pair is |A(i) - B(i)|.
I want to pair the n elements of A and B such that the sum of all costs across all pairs is minimized.
I understand that I can get O(n log n) by sorting A, then sorting B, and then pairing them together from 1...n respectively, but after attempting for hours and hours, I can't figure out how to prove it. Can somebody help me out?
I've seen how to implement it, I just don't get how to prove it
I am following a slightly different approach here to prove this fact, making use of squares rather than absolute values.
Consider 2 arrays, A = [a1, a2, ..., an] and B = [b1, b2, ..., bn].
Now, even if I use a random pairing (pairing each ai with b_p(i), where p is any permutation of 1..n),
the sum of squares of differences is S = a1^2 + b1^2 + a2^2 + b2^2 + ... + an^2 + bn^2 - 2 * (a1*b_p(1) + a2*b_p(2) + ... + an*b_p(n)).
The above sum can be represented as S = sum(ai^2) + sum(bi^2) - 2 * sum(ai * b_p(i)), for i from 1 to n.
To minimise this sum, we need to maximise the part sum(ai * b_p(i)), for i from 1 to n.
The term sum(ai * b_p(i)) will be maximum when the two arrays are sorted and paired in the same order.
Thanks to Abhinav Mathur for pointing this out: the statement "the term sum(ai * b_p(i)) will be maximum when the two arrays are sorted" can be proved using the rearrangement inequality.
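A quick brute-force spot-check of that claim on made-up data:

from itertools import permutations

A, B = [1, 4, 7], [2, 3, 9]
best = max(sum(a * b for a, b in zip(p, B)) for p in permutations(A))
assert best == sum(a * b for a, b in zip(sorted(A), sorted(B)))  # both are 77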
Assume that according to the current sorted arrays, there is a pair |x-a| and another pair |y-b|. Let's say that switching the elements would give a lesser sum, i.e. a better solution.
(Note: while switching around two pairs, the rest of array remains unaffected).
Current total sum of pairs = |x-a| + |y-b|
Modified sum after switching pairs = |x-b| + |y-a|
Difference in sums = diff = |x-b| + |y-a| - |x-a| - |y-b|
If diff is negative, it means we have found a better ordering. If not, it means our original solution was better.
Now, you can take cases and analyse this. (Since the arrays are sorted, let x<y (they're from the first array) and a<b (they're from the second array).)
Case 1: x>b or y<a:
In this case, both sums will be equal, which can be easily seen by expanding the modulus
Case 2: a<x<b:
If y>b, diff = 2*(b-x). Since we assumed b>x, diff is positive.
If y<b, diff = 2*(y-x). Since y>x as stated earlier, diff is again positive.
You can continue taking similar cases and prove that diff will always be positive, meaning that our original ordering will be the most efficient one.
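In code, the pairing itself is just sort-and-zip; here is a sketch (helper names are mine) with a brute-force cross-check on small random inputs:

from itertools import permutations
import random

def min_cost(A, B):
    # sort both arrays and pair elements by rank
    return sum(abs(a - b) for a, b in zip(sorted(A), sorted(B)))

def brute_force(A, B):
    # try every possible pairing
    return min(sum(abs(a - b) for a, b in zip(p, B)) for p in permutations(A))

A = [random.randint(0, 20) for _ in range(6)]
B = [random.randint(0, 20) for _ in range(6)]
assert min_cost(A, B) == brute_force(A, B)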
Sorting and pairing creates a matching that we might call "monotonic", which ensures that if A[i] matches B[x] and A[j] matches B[y], then:
If A[i] < A[j] then B[x] <= B[y]; and
If B[x] < B[y] then A[i] <= A[j]
If you choose a matching that is not monotonic, then one of these rules will be violated for some pair of matchings.
If we pick any two elements from each array such that A[i] <= A[j] and B[x] <= B[y], then we can evaluate the cost of the monotonic pairing and the other pairing. Note that if A[i] = A[j] or B[x] = B[y] then both pairings have the same cost, so it doesn't matter which one we call monotonic.
In order to compare the costs, we need to get rid of the absolute value operations. We can do that by separately considering all the possible orderings between the 4 values:
Case: A[i] <= A[j] <= B[x] <= B[y]:
Monotonic cost: B[x]-A[i] + B[y]-A[j]
Swapped cost: B[y]-A[i] + B[x]-A[j]
Difference: 0
cost is the same - doesn't matter which we choose
Case: A[i] <= B[x] <= A[j] <= B[y]
Monotonic cost: B[x]-A[i] + B[y]-A[j]
Swapped cost: B[y]-A[i] + A[j]-B[x]
Difference: 2A[j] - 2B[x]
since A[j] >= B[x], monotonic is as good or better
... etc
If you go through all 6 possible orderings, in every case you find that the monotonic matching is as good or better. Given any matching, you can make every pair of element matchings monotonic, and the cost can only go down.
If you start with an optimal matching and make every pair of matchings monotonic then you end up with an optimal monotonic matching. (In fact the one you start with has to be monotonic if it's optimal, but we don't have to prove that) Since every monotonic matching has the same cost, and at least one of them is optimal, they must all be optimal.
Given an array arr of n integers, what is the highest score that a player can reach, playing the following game?
Choose an index 0 < i < n-1 in the array
Add arr[i-1] * arr[i+1] points to the score (initially the score is 0)
Shrink the array by removing element i (for all j >= i: arr[j] = arr[j+1]); then n = n - 1
Repeat steps 1-3 until n == 2. The two remaining elements are the first and the last, since they can never be removed.
What is the highest score you can get ?
Example
arr = [1 2 3 4]
Choose i=2, get: 2*4 = 8 points, remove 3
Remaining: arr = [1 2 4]
Choose i=1, get 1*4 = 4 points, remove 2
Remaining: arr = [1 4].
The sum of points is 8 + 4 = 12, which is the highest possible score on this example.
I think it is related to Dynamic programming but I'm not sure how to solve it.
This problem has a dynamic programming approach similar to Matrix-chain multiplication problem. You can find further explanation in the book "Introduction to Algorithms", 3rd Edition (Cormen, page 370).
Let's find the optimal substructure property and then use it to construct an optimal solution to the problem from optimal solutions to subproblems.
Notation: Ci..j, where i ≤ j, stands for elements Ci,Ci+1,...,Cj.
Definition: A removal sequence for Ci..j is a permutation of i+1,i+2,...,j-1.
A removal sequence for Ci..j is optimal if the score achieved by removing the elements of Ci..j in that order is maximum among all possible removal sequences for Ci..j.
1. Characterize the structure of an optimal solution
If the problem is nontrivial, i.e. i + 1 < j, then any solution has a last removed element whose index k lies in the range i < k < j. Such a k splits the problem into Ci..k and Ck..j. That is, for some value k, we first remove the non-extremal elements of Ci..k and Ck..j, and then we remove element k. Since removing non-extremal elements of Ci..k does not affect the score obtained by removing non-extremal elements of Ck..j, and vice versa, the two subproblems are independent. Then, for a given removal sequence in which the k-th element is last, the score of Ci..j is equal to the sum of the scores of Ci..k and Ck..j, plus the score of removing the k-th element (C[i] * C[j]).
The optimal substructure of this problem is as follows. Suppose there is an optimal removal sequence O for Ci..j that ends at the k-th element; then the ordering of removed elements from Ci..k must be optimal too. We can prove it by contradiction: if there were a removal sequence for Ci..k that scored higher than the removal subsequence extracted from O for Ci..k, then we could produce another removal sequence for Ci..j with a higher score than the optimal removal sequence (contradiction). A similar observation holds for the ordering of removed elements from Ck..j in the optimal removal sequence for Ci..j: it must be optimal too.
We can build an optimal solution for nontrivial instances of the problem by splitting the problem into two subproblems, finding optimal solutions to subproblem instances, and then combining these optimal subproblem solutions.
2. Recursively define the value of an optimal solution.
For this problem our subproblems are the maximum score obtained in Ci..j for 1 ≤ i ≤ j ≤ N. Let S[i, j] be the maximum score obtained in Ci..j; for the full problem, the highest score when evaluating the given rules is S[1, N].
We can define S[i, j] recursively as follows:
If j ≤ i + 1 then S[i, j] = 0
If i + 1 < j then S[i, j] = max over i < k < j of { S[i, k] + S[k, j] + C[i] * C[j] }
We ensure that we search for the correct place to split because we consider all possible places, so that we are sure of having examined the optimal one.
3. Compute the value of an optimal solution
You can use your favorite method to compute S:
top-down approach (recursive)
bottom-up approach (iterative)
I would use bottom-up for computing the solution since it would be < 5 lines long in almost any programming language.
Example in C++11:
for (int l = 2; l <= N; ++l)                        // intervals of increasing length (l = j - i)
    for (int i = 1, j = i + l; j <= N; ++i, ++j)
        for (int k = i + 1; k < j; ++k)
            S[i][j] = max(S[i][j], S[i][k] + S[k][j] + C[i] * C[j]);
4. Time Complexity and Space Complexity
There are nC2 + n = Θ(n^2) subproblems, and every subproblem does an amount of work that is Θ(l), where l is the length of the subproblem, so the math yields a running time of Θ(n^3) for the algorithm (it's easy to spot the O(n^3) part :-)). Also, the algorithm requires Θ(n^2) space to store the S table.
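For reference, here is the same bottom-up DP as a complete, runnable Python sketch (0-indexed, so the answer for the whole array is S[0][n-1]):

def max_score(arr):
    n = len(arr)
    # S[i][j] = best score for removing all interior elements of arr[i..j]
    S = [[0] * n for _ in range(n)]
    for length in range(2, n):                     # length = j - i
        for i in range(n - length):
            j = i + length
            S[i][j] = max(S[i][k] + S[k][j] + arr[i] * arr[j]
                          for k in range(i + 1, j))
    return S[0][n - 1]

print(max_score([1, 2, 3, 4]))  # 12, matching the example above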
I am given an array A of integers. I have to choose any two integers from A and a third integer from the range [L,R], such that all three integers form a valid triangle.
I have to find the number of integers in the range [L,R] that can be used to form a valid triangle by choosing any two values from array A.
I know that if two sides are fixed, the third side x must lie in the range |a-b| < x < a+b,
where a and b are any two integers from A.
How do I find the number of valid integers in [L,R] in O(N) time, where N = size of A?
L and R can be very large, up to 10^20.
To my understanding, complete enumeration will yield an efficient solution by the following argument. The maximum number of possible solutions occurs when any selection of elements of A yields a consistent choice of side lengths for a triangle; such an input would be, for instance, N copies of the value 1. However, the number of triples chosen from A can be bounded by
(n choose 3) = n!/(3!(n-3)!)
             = n!/(6(n-3)!)
             = (1/6)*(n-2)*(n-1)*n
             <= n^3
(where choose is meant to denote the binomial coefficient) which is a polynomial number of choices. Any choice can be checked for validity in constant time, as only 3 values are involved.
Now the contest has ended, so here is my way to solve it in the contest.
The problem is asking how many numbers x in [L,R] can form a triangle with some pair (a_i, a_j) in A.
The naive method is to brute-force all pairs against every x, which is O(N^2 * (R-L+1)).
But indeed, we do not need to test all O(N^2) pairs; we only need to test O(N) pairs IF A is sorted, namely all adjacent pairs (a_(i-1), a_i) for i > 0.
Why? Because in sorted A:
If there is some pair (a_j, a_i) with a_j < a_i and j != i-1 that can form a triangle with x,
then (a_(i-1), a_i) must form a triangle with x too, since a_j <= a_(i-1) in sorted order:
a_(i-1) + a_i >= a_j + a_i > x
x + a_(i-1) >= x + a_j > a_i
x + a_i > a_i >= a_(i-1)
Therefore checking all (a_i-1, a_i) is sufficient, this is the first core idea to solve this problem.
So now we have an O(N lg N + N*(R-L+1)) algorithm.
For the original contest problem this is still too slow, as R-L+1 can be as large as 10^18, so we need another trick.
Note that by the triangle inequality, for a pair (a_(i-1), a_i) we can find the whole range of x which can form a triangle with this pair:
a_(i-1) + a_i > x > a_i - a_(i-1) (by 1 & 2 above)
For example, (4,5) can form triangle with all 1 < x < 9
So instead of iterating over all x in [L,R], we find the range of x for each pair, which can be done in O(N) total as we know the range of x for a pair in O(1). Beware that x must fall in the range [L,R].
After that we have O(N) ranges / segments of x which then we can union them and the size of the result set is the desired answer.
Taking the union of O(N) segments can be done easily in O(N) as well; I leave that to you as homework :)
Combining both tricks, the algorithm is O(N lg N + N + N) = O(N lg N)
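Putting both tricks together, a Python sketch might look like this (the segment union here just sorts the O(N) intervals instead of doing the linear-time merge left as homework):

def count_valid_x(A, L, R):
    A = sorted(A)
    # each adjacent pair (A[i-1], A[i]) admits A[i]-A[i-1] < x < A[i-1]+A[i]
    intervals = []
    for prev, cur in zip(A, A[1:]):
        lo = max(L, cur - prev + 1)   # strict lower bound, clipped to [L, R]
        hi = min(R, prev + cur - 1)   # strict upper bound, clipped to [L, R]
        if lo <= hi:
            intervals.append((lo, hi))
    intervals.sort()
    total, cur_lo, cur_hi = 0, None, None
    for lo, hi in intervals:
        if cur_hi is None or lo > cur_hi + 1:   # disjoint: close the previous run
            if cur_hi is not None:
                total += cur_hi - cur_lo + 1
            cur_lo, cur_hi = lo, hi
        else:                                   # overlapping or adjacent: extend
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo + 1
    return total

print(count_valid_x([4, 5], 1, 100))  # 7, i.e. x in {2, ..., 8}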
I have an array of elements [(A1, B1), ..., (An, Bn)] (all are positive floats and Bi <= 1) and I need to find the permutation which minimizes the sum A1 + B1 * A2 + B1 * B2 * A3 + ... + B1 * ... * B(n-1) * An.
Definitely I can just try all permutations and select the one which gives the smallest sum (this gives the correct result in O(n!)).
I tried to rewrite the sum as A1 + B1 * (A2 + B2 * (A3 + B3 * (... + B(n-1) * An))) and tried to use a greedy algorithm which grabs the biggest Ai element at each step (this does not yield a correct result).
Now when I look at the latest equation, it looks to me that here I see optimal substructure A(n - 1) + B(n - 1) * An and therefore I have to use dynamic programming, but I can not figure out correct direction. Any thoughts?
I think this can be solved in O(N log(N)).
Any permutation can be obtained by swapping pairs of adjacent elements; this is why bubble sort works, for example. So let's take a look at the effect of swapping entries (A[i], B[i]) and (A[i+1], B[i+1]). We want to find out in which cases it's a good idea to make this swap. The swap affects only the i-th and (i+1)-th terms; all others stay the same. Also, both before and after the swap, both terms have a factor B[1]*B[2]*...*B[i-1], which we can call C for now. C is a positive number.
Before the swap, the two terms we're dealing with are C*A[i] + C*B[i]*A[i+1], and afterwards they are C*A[i+1] + C*B[i+1]*A[i]. This is an improvement if the difference between the two is positive:
C*(A[i] + B[i]*A[i+1] - A[i+1] - B[i+1]*A[i]) > 0
Since C is positive, we can ignore that factor and look just at the As and Bs. We get
A[i] - B[i+1]*A[i] > A[i+1] - B[i]*A[i+1]
or equivalently
(1 - B[i+1])*A[i] > (1 - B[i])*A[i+1]
Both of these expressions are nonnegative; if one of B[i] or B[i+1] is one, then the term containing 'one minus that variable' is zero (so we should swap if B[i] is one but not if B[i+1] is one); if both variables are one, then both terms are zero. Let's assume for now that neither is equal to one; then we can rewrite further to obtain
A[i]/(1 - B[i]) > A[i+1]/(1 - B[i+1])
So we should compute this expression D[i] := A[i]/(1 - B[i]) for both terms and swap them if the left one is greater than the right one. We can extend this to the case where one or both Bs are one by defining D[i] to be infinitely big in that case.
OK, let's recap - what have we found? If there is a pair i, i+1 where D[i] > D[i+1], we should swap those two entries. That means that the only case where we cannot improve the result by swapping, is when we have reordered the pairs so that the D[i] values are in increasing order -- that is, all the cases with B[i] = 1 come last (recall that that corresponds to D[i] being infinitely large) and otherwise in increasing order of D[i] value. We can achieve that by sorting with respect to the D[i] value. A quick examination of our steps above shows that the order of pairs with equal D[i] value does not impact the final value.
Computing all D[i] values can be done in a single, linear-time pass. Sorting can be done with an O(N log(N)) algorithm (we needed the swapping-of-neighbouring-elements stuff only as an argument/proof to show that this is the optimal solution, not as part of the implementation).
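In Python, the whole thing might look like the sketch below (treating B[i] == 1 as an infinitely large D[i], as described above):

import math

def min_sum(pairs):               # pairs = [(A1, B1), ..., (An, Bn)]
    def d(ab):
        a, b = ab
        return a / (1 - b) if b < 1 else math.inf
    total, prefix = 0.0, 1.0
    for a, b in sorted(pairs, key=d):
        total += prefix * a       # prefix = B1 * ... * B(i-1)
        prefix *= b
    return total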
Problem statement:
There are 3 arrays A, B, C, all filled with positive integers, and all three arrays are of the same size.
Find min(|a-b|+|b-c|+|c-a|) where a is in A, b is in B, c is in C.
I worked on the problem the whole weekend. A friend told me that it can be done in linear time. I don't see how that could be possible.
How would you do it ?
Well, I think I can do it in O(n log n). I can only do O(n) if the arrays are initially sorted.
First, observe that you can permute a,b,c however you like without changing the value of the expression. So let x be the smallest of a,b,c; let y be the middle of the three; and let z be the maximum. Then note that the expression just equals 2*(z-x). (Edit: This is easy to see... Once you have the three numbers in order, x < y < z, the sum is just (y-x) + (z-y) + (z-x) which equals 2*(z-x))
Thus, all we are really trying to do is find three numbers such that the outer two are as close together as possible, with the other number "sandwiched" between them.
So start by sorting all three arrays in O(n log n). Maintain an index into each array; call these i, j, and k. Initialize all three to zero. Whichever index points to the smallest value, increment that index. That is, if A[i] is smaller than B[j] and C[k], increment i; if B[j] is smallest, increment j; if C[k] is smallest, increment k. Repeat, keeping track of |A[i]-B[j]| + |B[j]-C[k]| + |C[k]-A[i]| the whole time. The smallest value you observe during this march is your answer. (When the smallest of the three is at the end of its array, stop because you are done.)
At each step, you add one to exactly one index; but you can only do this n times for each array before hitting the end. So this is at most 3*n steps, which is O(n), which is less than O(n log n), meaning the total time is O(n log n). (Or just O(n) if you can assume the arrays are sorted.)
Sketch of a proof that this works: Suppose A[I], B[J], C[K] are the a, b, c that form the actual answer; i.e., they have the minimum |a-b|+|b-c|+|c-a|. Suppose further that a > b > c; the proof for the other cases is symmetric.
Lemma: During our march, we do not increment j past J until after we increment k past K. Proof: We always increment the index of the smallest element, and when k <= K, B[J] > C[k]. So when j=J and k <= K, B[j] is not the smallest element, so we do not increment j.
Now suppose we increment k past K before i reaches I. What do things look like just before we perform that increment? Well, C[k] is the smallest of the three at that moment, because we are about to increment k. A[i] is less than or equal to A[I], because i < I and A is sorted. Finally, j <= J because k <= K (by our Lemma), so B[j] is also less than A[I]. Taken together, this means our sum-of-abs-diff at this moment is less than 2*(a-c), which is a contradiction.
Thus, we do not increment k past K until i reaches I. Therefore, at some point during our march i=I and k=K. By our Lemma, at this point j is less than or equal to J. So at this point, either B[j] is less than the other two and j will get incremented; or B[j] is between the other two and our sum is just 2*(A[i]-C[k]), which is the right answer.
This proof is sloppy; in particular, it fails to explicitly account for the case where one or more of a,b,c are equal. But I think that detail can be worked out pretty easily.
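Here is what the march looks like in Python (a sketch; if the arrays are already sorted, drop the three sorts and the whole thing is O(n)):

def min_expr(A, B, C):
    A, B, C = sorted(A), sorted(B), sorted(C)
    i = j = k = 0
    best = float('inf')
    while i < len(A) and j < len(B) and k < len(C):
        a, b, c = A[i], B[j], C[k]
        best = min(best, abs(a - b) + abs(b - c) + abs(c - a))
        # advance the index that points at the smallest value
        if a <= b and a <= c:
            i += 1
        elif b <= a and b <= c:
            j += 1
        else:
            k += 1
    return best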
I would write a really simple program like this:
#!/usr/bin/python
import random

A = random.sample(range(100), 10)
B = random.sample(range(100), 10)
C = random.sample(range(100), 10)

minsum = float('inf')
for a in A:
    for b in B:
        for c in C:
            print('checking with a=%d b=%d c=%d' % (a, b, c))
            abcsum = abs(a - b) + abs(b - c) + abs(c - a)
            if abcsum < minsum:
                print('found new low sum %d with a=%d b=%d c=%d' % (abcsum, a, b, c))
                minsum = abcsum
And test it over and over until I saw some pattern emerge. The pattern I found here is what would be expected: the numbers that are closest together in each set, regardless of whether the numbers are "high" or "low", are those that produce the smallest minimum sum. So it becomes a nearest-number problem. For whatever that's worth, probably not much.