Given as input a sorted array of floats, I need to find the total number of pairs (i, j) with i < j such that A[i]*A[j] >= A[i]+A[j].
I already know the naive solution, using a loop inside another loop, which gives an O(n^2) algorithm, but I was wondering if there is a more optimal solution.
Here's an O(n) algorithm.
Let's look at A * B >= A + B.
When A, B <= 0, it's always true.
When A, B >= 2, it's always true.
When A >= 1, B <= 1 (or B >= 1, A <= 1), it's always false.
When 0 < A < 1, B < 0 (or 0 < B < 1, A < 0), it can be either true or false.
When 1 < A < 2, B > 0 (or 1 < B < 2, A > 0), it can be either true or false.
Here's a visualization, courtesy of Wolfram Alpha and Geobits:
Now, onto the algorithm.
* To find the pairs where one number is between 0 and 1 or 1 and 2 I do something similar to what is done for the 3SUM problem.
* "Pick 2" here is referring to combinations.
Count all the pairs where both numbers are negative or zero
Do a binary search to find the index of the first positive (> 0) number - O(log n).
Since we have the index, we know how many numbers are negative or zero; we simply need to pick 2 of them, so that's amountNonPositive * (amountNonPositive-1) / 2 - O(1).
Find all the pairs where one is between 0 and 1
Do a binary search to find the index of the last number < 1 - O(log n).
Start from that index as the right index and the left-most element as the left index.
Repeat this until the value at the right index is <= 0: (runs in O(n))
While the product is greater than or equal to the sum, increase the left index
Count all the elements before the left index
Decrease the right index
Find all the pairs where one is between 1 and 2
Do a binary search to find the index of the first number > 1 - O(log n).
Start from that index as the left index and the right-most element as the right index.
Repeat this until the value at the left index is >= 2: (runs in O(n))
While the product is greater than or equal to the sum, decrease the right index
Count all the elements after the right index
Increase the left index
Count all the pairs with both numbers >= 2
At the end of the last step, we're at the first index >= 2.
Now, from there, we just need to pick 2 of all the remaining numbers,
so it's again amountGreaterEqual2 * (amountGreaterEqual2-1) / 2 - O(1).
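Putting the four steps together, here is a sketch in Python of how this could be implemented; the bisect calls, variable names, and pointer bookkeeping are my own reading of the steps above rather than code from the original answer:

import bisect

def count_pairs(A):
    # A is sorted; count pairs i < j with A[i]*A[j] >= A[i]+A[j].
    n = len(A)
    first_pos = bisect.bisect_right(A, 0.0)    # index of the first number > 0
    first_ge2 = bisect.bisect_left(A, 2.0)     # index of the first number >= 2

    # Both numbers <= 0: always true, just pick 2 of them.
    count = first_pos * (first_pos - 1) // 2

    # One number b in (0, 1): its partners are the most negative elements
    # (those a with a <= b/(b-1)), i.e. a prefix of the array.
    right = bisect.bisect_left(A, 1.0) - 1     # last number < 1
    left = 0
    while right >= first_pos:                  # A[right] is in (0, 1)
        b = A[right]
        # advance left past every partner that still satisfies the condition
        while left < first_pos and A[left] * b >= A[left] + b:
            left += 1
        count += left                          # partners are A[0 .. left-1]
        right -= 1

    # One number a in (1, 2): its partners are the largest elements
    # (those b with b >= a/(a-1) > 2), i.e. a suffix of the array.
    left = bisect.bisect_right(A, 1.0)         # first number > 1
    right = n - 1
    while left < first_ge2:                    # A[left] is in (1, 2)
        a = A[left]
        # move right down past every partner that satisfies the condition
        while right > left and a * A[right] >= a + A[right]:
            right -= 1
        count += n - 1 - right                 # partners are A[right+1 .. n-1]
        left += 1

    # Both numbers >= 2: always true, pick 2 of them.
    m = n - first_ge2
    count += m * (m - 1) // 2
    return count

A quick cross-check against the naive double loop on random sorted inputs is an easy way to gain confidence in the pointer bookkeeping.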
You can find and print the pairs (in a shorthand form) in O(n log n).
For each A[i] there is a minimum number k that satisfies the condition.
All values greater than k will also satisfy the condition.
Finding the lowest j such that A[j] >= k using binary search is O(log n).
So you can find and print the result like this:
(i, j)
(1, no match)
(2, no match)
(3, >=25)
(4, >=20)
(5, >=12)
(6, >6)
(7, >7)
...
(n-1, n)
If you want to print all combinations, then it is O(n^2), because the number of combinations is O(n^2).
(*) To handle negative numbers it actually needs to be a bit more complex, because the numbers that satisfy the equation can span more than one range.
I'm not absolutely sure how it behaves for small negative numbers, but if the number of ranges is not bounded then my solution is no longer better than O(n^2).
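For the A[i] > 1 part, this shorthand printing can be sketched with one binary search per element; the function below is only an illustration (the handling of values <= 1, which the note above discusses, is left out, and the output format is my own choice):

import bisect

def print_pairs_shorthand(A):
    # For A[i] > 1 the matching values are exactly those >= A[i] / (A[i] - 1),
    # so one binary search per element gives the whole range in shorthand form.
    for i, a in enumerate(A):
        if a <= 1:
            continue
        threshold = a / (a - 1)
        j = bisect.bisect_left(A, threshold, i + 1)   # lowest j > i with A[j] >= threshold
        if j < len(A):
            print("({}, >= index {})".format(i, j))
        else:
            print("({}, no match)".format(i))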
Here's a binary search, O(n log n):
There's a breaking point for each number at A*B = A+B. You can reduce this to B = A / (A - 1). All numbers on one side or the other will fit it. It doesn't matter if there are negative numbers, etc.
If A < 1, then all numbers <= B fit.
If A > 1, then all numbers >= B fit.
If A == 1, then there is no match (divide by zero).
(Wolfram Alpha link)
So some pseudocode:
loop through i
a = A[i]
if(a == 1)
continue
if(a >= 2)
count += A.length - i - 1
continue
j = binsearch(a / (a-1))
if(j <= i)
continue
if(a < 1)
count += j-i
if(a > 1)
count += A.length - j
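Here is a sketch of that pseudocode in Python; note that the binary search has to find a different boundary in the two cases (the last element <= the breaking point when a < 1, the first element >= it when a > 1), which bisect handles directly:

import bisect

def count_pairs_binsearch(A):
    # A is sorted; count pairs i < j with A[i]*A[j] >= A[i]+A[j].
    count = 0
    n = len(A)
    for i, a in enumerate(A):
        if a == 1:
            continue
        if a >= 2:
            count += n - i - 1                    # every later element works
            continue
        breaking_point = a / (a - 1)
        if a < 1:
            j = bisect.bisect_right(A, breaking_point) - 1   # last index with A[j] <= breaking point
            if j > i:
                count += j - i
        else:                                     # 1 < a < 2
            j = bisect.bisect_left(A, breaking_point)        # first index with A[j] >= breaking point
            count += n - j
    return count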
Here's a O(n) algorithm that solves the problem when the array's elements are positive.
When the elements are positive, we can say that:
If A[i]*A[j] >= A[i]+A[j] when j>i then A[k]*A[j] >= A[k]+A[j] for any k that satisfies k>i (because the array is sorted).
If A[i]*A[j] < A[i]+A[j] when j>i then A[i]*A[k] < A[i]+A[k] for any k that satisfies k<j.
(these facts don't hold when both numbers are fractions, but then the condition won't be satisfied anyway)
Thus we can perform the following algorithm:
int findNumOfPairs(float A[])
{
start = 0;
end = A.length - 1;
numOfPairs = 0;
while (start != end)
{
if (A[start]*A[end] >= A[start]+A[end])
{
numOfPairs += end - start;
end--;
}
else
{
start++;
}
}
return numOfPairs;
}
How about excluding all floats less than 1.0 first, since any number multiplied by a number less than 1 gives a product smaller than the sum A[i]+A[j]? Then we only need to count the remaining numbers to calculate the number of pairs (i, j); we can use the formula for combinations, which should be n(n-1)/2.
Given two large sets A and B of scalar (floating point) values, what algorithm would you use to find the (scalar) range [x0,x1] containing zero elements from B and the maximum number of elements from A?
Is sorting complexity (O(n log n)) unavoidable?
Create a single list with all values, where each value is marked with two counts: one count that relates to set A, and another that relates to set B. Initially these counts are 1 and 0, when the value comes from set A, and 0 and 1 when the value comes from set B. So entries in this list could be tuples (value, countA, countB). This operation is O(n).
Sort these tuples. O(nlogn)
Merge tuples with duplicate values into one tuple, and accumulate the values in the corresponding counters, so that the tuple tells us how many times the value occurs in set A and how many times in set B. O(n)
Traverse this list in sorted order and maintain the largest sum of countA over a run of adjacent tuples whose countB is always 0, together with the minimum and maximum value of that run. O(n)
The sorting is the determining factor of the time complexity: O(nlogn).
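A sketch of this tuple-merge-and-scan approach in Python (the function name, the dict-based merge, and the return convention are mine, not part of the answer above):

from collections import defaultdict

def best_range(A, B):
    counts = defaultdict(lambda: [0, 0])       # value -> [countA, countB]
    for v in A:
        counts[v][0] += 1
    for v in B:
        counts[v][1] += 1
    merged = sorted(counts.items())            # the O(n log n) step

    best = (0, None, None)                     # (covered A elements, x0, x1)
    run, lo = 0, None
    for value, (count_a, count_b) in merged:
        if count_b:                            # a B value ends any zero-B run
            run, lo = 0, None
        else:
            if lo is None:
                lo = value
            run += count_a
            if run > best[0]:
                best = (run, lo, value)
    return best                                # (0, None, None) if no B-free range exists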
Sort both A and B in O(|A| log |A| + |B| log |B|). Then apply the following algorithm, which has complexity O(|A| + |B|):
i = j = k = 0
best_interval = (0, 1)
while i < len(B) - 1:
lo = B[i]
hi = B[i+1]
j = k # We can skip ahead from last iteration.
while j < len(A) and A[j] <= lo:
j += 1
k = j # We can skip ahead from the above loop.
while k < len(A) and A[k] < hi:
k += 1
if k - j > best_interval[1] - best_interval[0]:
best_interval = (j, k)
i += 1
x0 = A[best_interval[0]]
x1 = A[best_interval[1]-1]
It may look quadratic at a first inspection but note we never decrease j and k - it really is just a linear scan with three pointers.
I am pretty new to Dynamic Programming, but I am trying to get better. I have an exercise from a book, which asks me the following question (slightly abridged):
You want to construct a sequence of length N from numbers in the set {1, 2, 3, 4, 5, 6}. However, you cannot place the number i (i = 1, 2, 3, 4, 5, 6) more than A[i] times consecutively, where A is a given array. Given the sequence length N (1 <= N <= 10^5) and the constraint array A (1 <= A[i] <= 50), how many sequences are possible?
For instance if A = {1, 2, 1, 2, 1, 2} and N = 2, this would mean you can only have one consecutive 1, two consecutive 2's, one consecutive 3, etc. Here, something like "11" is invalid since it has two consecutive 1's, whereas something like "12" or "22" are both valid. It turns out that the actual answer for this case is 33 (there are 36 total two-digit sequences, but "11", "33", and "55" are all invalid, which gives 33).
Somebody told me that one way to solve this problem is to use dynamic programming with three states. More specifically, they say to keep a 3d array dp(i, j, k) with i representing the current position we are at in the sequence, j representing the element put in position i - 1, and k representing the number of times that this element has been repeated in the block. They also told me that for the transitions, we can put in position i every element different from j, and we can only put j in if A[j] > k.
It all makes sense to me in theory, but I've been struggling with implementing this. I have no clue how to begin with the actual implementation other than initializing the matrix dp. Typically, most of the other exercises had some sort of "base case" that were manually set in the matrix, and then a loop was used to fill in the other entries.
I guess I am particularly confused because this is a 3D array.
For a moment let's just not care about the array. Let's implement this recursively. Let dp(i, j, k) be the number of sequences with length i, last element j, and k consecutive occurrences of j at the end of the array.
The question now becomes how do we write the solution of dp(i, j, k) recursively.
Well, we know that we are adding a j for the k-th time, so we have to take each sequence of length i - 1 that ends with exactly k - 1 consecutive occurrences of j, and add another j to that sequence. Notice that this is simply dp(i - 1, j, k - 1).
But what if k == 1? If that's the case, we can add one occurrence of j to every sequence of length i - 1 that doesn't end with j. Essentially we need the sum of all dp(i - 1, x, c) such that x != j and 1 <= c <= A[x].
This gives our recurrence relation:
def dp(i, j, k):
# this is the base case, the number of sequences of length 1
# one if k is valid, otherwise zero
if i == 1: return int(k == 1)
if k > 1:
# get all the valid sequences [0...i-1] and add j to them
return dp(i - 1, j, k - 1)
if k == 1:
# get all valid sequences that don't end with j
res = 0
for last in range(len(A)):
if last == j: continue
for n_consec in range(1, A[last] + 1):
res += dp(i - 1, last, n_consec)
return res
We know that our answer will be the number of all valid sequences of length N, so our final answer is sum(dp(N, j, k) for j in range(len(A)) for k in range(1, A[j] + 1))
Believe it or not this is the basis of dynamic programming. We just broke our main problem down into a set of subproblems. Of course, right now our time is exponential because of the recursion. We have two ways to lower this:
Caching, we can simply keep track of the result of each (i, j, k) and then spit out what we originally computed when it's called again.
Use an array. We can reimplement this idea with bottom-up dp and have an array dp[i][j][k]. All of our function calls just become array accesses in a for loop. Note that using this method forces us to iterate over the array in topological order, which may be tricky.
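For what it's worth, the first option (caching) can be sketched by wrapping the recursion above in functools.lru_cache, using the example from the question (A = [1, 2, 1, 2, 1, 2], N = 2, expected answer 33):

from functools import lru_cache

A = [1, 2, 1, 2, 1, 2]   # the example from the question
N = 2

@lru_cache(maxsize=None)
def dp(i, j, k):
    if i == 1:
        return int(k == 1)
    if k > 1:
        return dp(i - 1, j, k - 1)
    # k == 1: append j to every valid sequence of length i - 1 not ending in j
    return sum(dp(i - 1, last, c)
               for last in range(len(A)) if last != j
               for c in range(1, A[last] + 1))

answer = sum(dp(N, j, k) for j in range(len(A)) for k in range(1, A[j] + 1))
print(answer)   # 33 for this example

Note that for N up to 10^5 the recursion depth would blow past Python's default limit, which is a practical argument for the bottom-up array version.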
There are 2 kinds of dp approaches: top-down and bottom-up
In bottom-up, you fill the terminal cases in the dp table and then use for loops to build up from them. Let's consider a bottom-up algorithm to generate the Fibonacci sequence. We set dp[0] = 1 and dp[1] = 1 and run a for loop from i = 2 to n.
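For example, a minimal bottom-up Fibonacci with that convention might look like this (the function name is mine):

def fib_bottom_up(n):
    # terminal cases first, then build up with a loop
    dp = [1] * (n + 1)
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]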
In top down approach, we start from the "top" view of the problem and go down from there. Consider the recursive function to get n-th Fibonacci number:
# assumes a memo table dp = [-1] * (n + 1) has been initialized before the first call
def fib(n):
    if n <= 1:
        return 1
    if dp[n] != -1:
        return dp[n]
    dp[n] = fib(n - 1) + fib(n - 2)
    return dp[n]
Here we don't fill the complete table, but only the cases we encounter.
Why am I talking about these two types? Because when you start learning dp, it is often difficult to come up with bottom-up approaches (like you are trying to). When this happens, first come up with a top-down approach, and then try to derive a bottom-up solution from it.
So let's create a recursive dp function first:
# let m be size of A
# initialize dp table with all values -1
def solve(i, j, k, n, m):
    # first write terminal cases
    if k > A[j]:
        # this means the sequence is invalid, so return 0
        return 0
    if i >= n:
        # this means a valid sequence
        return 1
    if dp[i][j][k] != -1:
        return dp[i][j][k]
    result = 0
    for num = 1 to m:
        if num == j:
            result += solve(i + 1, num, k + 1, n, m)
        else:
            result += solve(i + 1, num, 1, n, m)
    dp[i][j][k] = result
    return dp[i][j][k]
So we know what terminal cases are. We create a dp table of size dp[n + 1][m][50]. Initialize it with all values 0, not -1.
So we can do bottom-up as:
# initially all values in table are zero. With loop below, we set the valid endings as 1.
# So any state trying to reach valid terminal states will get 1, but invalid states will
# return the values 0
for num = 1 to m:
for occour = 1 to A[num]:
dp[n][num][occour] = 1
# now to build up from bottom, we start by filling n-1 th position
for i = n-1 down to 1:
for num = 1 to m:
for occour = 1 to A[num]:
for next_num = 1 to m:
if next_num != num:
dp[i][num][occour] += dp[i + 1][next_num][1]
else:
dp[i][num][occour] += dp[i + 1][num][occour + 1]
The answer will be:
sum = 0
for num = 1 to m:
sum += dp[1][num][1]
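For reference, here is how that table can be written as runnable Python (0-indexed; the extra slot in the third dimension is there so the occour + 1 lookup never runs off the end; the function name is mine):

def count_sequences(N, A):
    # dp[i][j][k]: ways to fill positions i..N-1 given position i holds digit j
    # as the k-th consecutive occurrence (k is 1-based and capped at A[j]).
    m = len(A)
    max_a = max(A)
    dp = [[[0] * (max_a + 2) for _ in range(m)] for _ in range(N)]
    for j in range(m):                       # valid endings at the last position
        for k in range(1, A[j] + 1):
            dp[N - 1][j][k] = 1
    for i in range(N - 2, -1, -1):           # build up from the bottom
        for j in range(m):
            for k in range(1, A[j] + 1):
                total = 0
                for nxt in range(m):
                    if nxt != j:
                        total += dp[i + 1][nxt][1]
                    else:
                        total += dp[i + 1][j][k + 1]
                dp[i][j][k] = total
    return sum(dp[0][j][1] for j in range(m))

print(count_sequences(2, [1, 2, 1, 2, 1, 2]))   # 33, matching the example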
I am sure there must be some more elegant dp solution, but I believe this answers your question. Note that I considered that k is the number of times j-th number has been repeated consecutively, correct me if I am wrong with this.
Edit:
With the given constraints the size of the table will be, in the worst case, 10^5 * 6 * 50 = 3e7. This would be > 100MB. It is workable, but can be considered too much space to use (I think some kernels don't allow that much stack space to a process). One way to reduce it would be to use a hash map instead of an array with the top-down approach, since top-down doesn't visit all the states. That would mostly help in this case; for example, if A[1] is 2, then all the states where 1 has occurred more than twice need not be stored. Of course this would not save much space if A[i] has large values, say [50, 50, 50, 50, 50, 50]. Another approach would be to modify our approach a bit. We don't actually need to store the dimension k, i.e. the number of times j has appeared consecutively:
dp[i][j] = number of ways to fill the sequence from the i-th position onward, given that the (i - 1)-th position doesn't hold j and the i-th position is j.
Then, we would need to modify our algo to be like:
def solve(i, j):
    if i == n:
        # only a single j fits in the last position
        return 1
    if dp[i][j] != -1:
        return dp[i][j]
    result = 0
    # we will first try 1 consecutive j, then 2 consecutive j's, then 3 and so on
    for count = 1 to A[j]:
        if i + count - 1 > n:
            break
        if i + count - 1 == n:
            # the block of j's reaches exactly to the end: one valid sequence
            result += 1
            break
        # otherwise some number other than j must follow the block
        for num = 1 to m:
            if num != j:
                result += solve(i + count, num)
    dp[i][j] = result
    return dp[i][j]
This approach will reduce our space complexity to roughly 10^5 * 6 = 6 * 10^5 entries (~2 MB), while the time complexity is still the same: O(N * 6 * 50).
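Here is a runnable sketch of this space-reduced version (0-indexed, with the "block reaches the end" case handled explicitly, and the final answer summed over the possible first digits; the names are mine):

from functools import lru_cache

def count_sequences_2d(N, A):
    # solve(i, j): ways to fill positions i..N-1 given a run of digit j starts at i
    # and position i-1 (if any) does not hold j.
    m = len(A)

    @lru_cache(maxsize=None)
    def solve(i, j):
        total = 0
        for run in range(1, A[j] + 1):       # length of the block of j's starting at i
            end = i + run                    # first position after the block
            if end == N:
                total += 1                   # the block fills the sequence exactly
                break
            if end > N:
                break
            for nxt in range(m):             # a different digit must follow the block
                if nxt != j:
                    total += solve(end, nxt)
        return total

    return sum(solve(0, j) for j in range(m))

print(count_sequences_2d(2, [1, 2, 1, 2, 1, 2]))   # 33

For N near 10^5 the recursion gets deep, so in practice this would be converted into the same kind of bottom-up loop as above.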
The requirement is that the input will be a set of integers ranging from -5 to 5; the result should give the longest contiguous run (subarray) of the input whose total is greater than or equal to zero.
I can only come up with the following:
The input will be input[0 to n]
let start, longestStart, end, longestEnd, sum = 0
for i=0 to n-1
    start = i
    sum = 0
    for j=i to n-1
        sum = sum + input[j]
        if sum >= 0 then
            end = j
            if end - start > longestEnd - longestStart then
                longestStart = start
                longestEnd = end
However, this is ϴ(n^2). I would like to know what ways there are to make this ϴ(n).
Thank you
Since
a - b == (a + n) - (b + n)
for any a, b or n, we can apply this to the array of numbers by keeping a running total of all elements from 0 up to the current index. From the above equation, the sum of any subarray from index a to b is sum(elements 0..b) - sum(elements 0..a-1).
By keeping track of the running total's minima and maxima, and the sums at those points, you can find the longest qualifying subarray in one pass, i.e. O(n) (see the sketch below).
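Here is one way I'd flesh that out in Python: take prefix sums, then find the farthest-apart indices p < q with prefix[q] >= prefix[p] using prefix-minimum and suffix-maximum arrays plus two pointers (this particular construction is my own, assuming the goal is the longest contiguous run with a non-negative total):

def longest_nonnegative_run(xs):
    # A subarray xs[p..q-1] has sum >= 0 exactly when prefix[q] >= prefix[p],
    # so we want the farthest-apart pair (p, q) with that property.
    n = len(xs)
    prefix = [0] * (n + 1)
    for i, x in enumerate(xs):
        prefix[i + 1] = prefix[i] + x

    min_left = list(prefix)                  # min_left[p]  = min(prefix[0..p])
    for p in range(1, n + 1):
        min_left[p] = min(min_left[p - 1], prefix[p])
    max_right = list(prefix)                 # max_right[q] = max(prefix[q..n])
    for q in range(n - 1, -1, -1):
        max_right[q] = max(max_right[q + 1], prefix[q])

    best, p, q = 0, 0, 0
    while p <= n and q <= n:
        if min_left[p] <= max_right[q]:
            best = max(best, q - p)          # some run at least this long qualifies
            q += 1
        else:
            p += 1
    return best                              # length of the longest such subarray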
Given two arrays A and B, each containing n non-negative numbers, remove a>0 elements from the end of A and b>0 elements from the end of B. Evaluate the cost of such an operation as X*Y where X is the sum of the a elements removed from A and Y the sum of the b elements removed from B. Keep doing this until both arrays are empty. The goal is to minimize the total cost.
Using dynamic programming and the fact that an optimal strategy will always take exactly one element from either A or B I can find an O(n^3) solution. Now I'm curious to know if there is an even faster solution to this problem?
EDIT: Stealing an example from #recursive in the comments:
A = [1,9,1] and B = [1, 9, 1]. Possible to do with a cost of 20: (1) * (1 + 9) + (9 + 1) * (1)
Here's O(n^2). Let CostA(i, j) be the min cost of eliminating A[1..i], B[1..j] in such a way that the first removal takes only one element from B. Let CostB(i, j) be the min cost of eliminating A[1..i], B[1..j] in such a way that the first removal takes only one element from A. We have mutually recursive recurrences
CostA(i, j) = A[i] * B[j] + min(CostA(i - 1, j),
CostA(i - 1, j - 1),
CostB(i - 1, j - 1))
CostB(i, j) = A[i] * B[j] + min(CostB(i, j - 1),
CostA(i - 1, j - 1),
CostB(i - 1, j - 1))
with base cases
CostA(0, 0) = 0
CostA(>0, 0) = infinity
CostA(0, >0) = infinity
CostB(0, 0) = 0
CostB(>0, 0) = infinity
CostB(0, >0) = infinity.
The answer is min(CostA(n, n), CostB(n, n)).
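A direct bottom-up evaluation of these recurrences in Python (my own translation; cost_a and cost_b stand for CostA and CostB) reproduces the 20 from the example:

import math

def min_total_cost(A, B):
    # cost_a[i][j] / cost_b[i][j] correspond to CostA(i, j) / CostB(i, j),
    # where index 0 means "nothing left to remove".
    n, m = len(A), len(B)
    INF = math.inf
    cost_a = [[INF] * (m + 1) for _ in range(n + 1)]
    cost_b = [[INF] * (m + 1) for _ in range(n + 1)]
    cost_a[0][0] = cost_b[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            p = A[i - 1] * B[j - 1]
            cost_a[i][j] = p + min(cost_a[i - 1][j],
                                   cost_a[i - 1][j - 1],
                                   cost_b[i - 1][j - 1])
            cost_b[i][j] = p + min(cost_b[i][j - 1],
                                   cost_a[i - 1][j - 1],
                                   cost_b[i - 1][j - 1])
    return min(cost_a[n][m], cost_b[n][m])

print(min_total_cost([1, 9, 1], [1, 9, 1]))   # 20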
Given an array A of size N, how do I count the number of pairs (A[i], A[j]) such that the absolute difference between them is less than or equal to K, where K is any positive natural number? (i, j <= N and i != j)
My approach:
Sort the array.
Create another array that stores the absolute difference between two consecutive numbers.
Am I heading in the right direction? If yes, then how do I proceed further?
Here is an O(nlogn) algorithm:
1. sort input
2. traverse the sorted array in ascending order.
3. for A[i], find the largest index j such that A[j] <= A[i]+k using binary search.
4. count = count+j-i
5. do 3 to 4 all i's
Time complexity :-
Sorting : O(nlogn)
Binary Search : O(logn)
Overall : O(nlogn)
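As a sketch, here are steps 1-5 in Python, with bisect doing the binary search (the function name is mine):

import bisect

def count_pairs_within_k(A, k):
    A = sorted(A)
    count = 0
    for i, a in enumerate(A):
        # largest index j with A[j] <= A[i] + k
        j = bisect.bisect_right(A, a + k) - 1
        count += j - i
    return count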
This is O(n^2):
Sort the array
For each item_i in array,
For each item_j in array such that j > i
If item_j - item_i <= k, print (item_j, item_i)
Else proceed with the next item_i
Your approach is partially correct. You first sort the array. Then keep two pointers i and j.
1. Initialize i = 0, j = 1.
2. Check if A[j] - A[i] <= K.
- If yes, then increment j,
- else
- **increase the count of pairs by C(j-i,2)**.
- increment i.
- if i == j, then increment j.
3. Do this till pointer j goes past the end of the array. Then just add C(j-i,2) to the count and stop.
By i and j, you are basically maintaining a window within which the difference between elements is <= K.
EDIT: This is the basic idea, you will have to check for boundary conditions. Also you will have to keep track of the past interval that was added to the count. You will need to subtract the overlap with the current interval to avoid double counting.
Complexity: O(NlogN), for the sort operation, linear for the array traversal
Once your array is sorted, you can compute the count in O(N) time.
Here's some code. The O(N) algorithm is pair_sums, and pair_sums_slow is the obviously correct, but O(N^2) algorithm. I run through some test cases at the end to make sure that the two algorithms returns the same results.
def pair_sums(A, k):
    A.sort()
    counts = 0
    j = 0
    for i in range(len(A)):
        while j < len(A) and A[j] - A[i] <= k:
            j += 1
        counts += j - i - 1
    return counts

def pair_sums_slow(A, k):
    counts = 0
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            if A[j] - A[i] <= k:
                counts += 1
    return counts

cases = [
    ([0, 1, 2, 3, 4, 5], 10),
    ([0, 0, 0, 0, 0], 1),
    ([0, 1, 2, 4, 8, 16], 9),
    ([0, -1, -2, 1, 2], 2)
]

for A, k in cases:
    want = pair_sums_slow(A, k)
    got = pair_sums(A, k)
    if want != got:
        print(A, k, want, got)
The idea behind pair_sums is that for each i, we find the smallest j such that A[j] - A[i] > K (or j=N). Then j-i-1 is the number of pairs with i as the first value.
Because the array is sorted, j only ever increases as i increases, so the overall complexity is linear: although there are nested loops, the inner operation j += 1 can occur at most N times in total.