I'm trying to understand a solution that I found for the following LeetCode problem.
Description:
"You are given an array of prices where prices[i] is the price of a given stock on the ith day.
You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock.
Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0. "
Explanation:
"Input: prices = [7,1,5,3,6,4]
Output: 5
Explanation: Buy on day 2 (price = 1) and sell on day 5 (price = 6), profit = 6-1 = 5.
Note that buying on day 2 and selling on day 1 is not allowed because you must buy before you sell."
And I came across this solution that I'm trying to understand. Breaking it down at "->":
def max_profit(prices)   # called with prices = [7, 1, 5, 3, 6, 4]
  value = 0
  profit = 0
  (1...prices.size).each do |i|
    value += (prices[i] - prices[i-1])
-> So here value = 0 + (1 - 7) = -6, then value = -6 + (5 - 1) = -2, then value = -2 + (3 - 5) = -4, and so on, ending in -3
    value = [0, value].max
-> This is what I don't get. Now value = [0, value].max, and when I print it I get 0, 4, 2, 5, 3.
The way I'm seeing this is:
(in the first iteration) value = [0, -6].max, so value is 0 because 0 > -6,
but then I get 4 for the second iteration when value = [0, -2].max ... Shouldn't it be 0 again?? How am I getting 0, 4, 2, 5, 3???
What actually happens when I do value = [0, value].max?
    profit = value if value > profit
  end
  profit
end
A million thanks
The #max method for arrays returns the largest value in the array, so [3, 7, 4].max will return 7 (the largest value).
value = [0, value].max
This is basically returning whichever is larger (zero or value) and assigning it to value. So it replaces any negative quantity in value with zero, but leaves it alone if it is a positive value.
Another way to do the same thing...
value = 0 if value < 0
A note about the condition that you must buy before you sell: basically we are looking for the maximum among per-day max profits (the max profit for day i is the largest of the differences between prices[i] and the prices after i),
so a naive solution uses 2 loops:
max of [
  max-profit-0: max of p[1] - p[0], p[2] - p[0], ...
  max-profit-1: max of p[2] - p[1], p[3] - p[1], ...
  ...
]
But the solution you provided is brilliant: it needs only one loop, by taking advantage of the following observations.
The profit between price 0 and price 2 is p[2] - p[0] == (p[1] - p[0]) + (p[2] - p[1]).
=> That is what the line value += (prices[i] - prices[i-1]) does: it accumulates the profit between the current date j (the current step) and the start date i,
as long as the running difference stays positive.
In case the sum above is positive at steps 1 and 2, i.e. p[1] - p[0] > 0 and (p[1] - p[0]) + (p[2] - p[1]) > 0, we know p[0] < p[1] and p[0] < p[2]. So for every p[j] after date 2 (j > 2), p[j] - p[0] is always larger than p[j] - p[1] and p[j] - p[2], which means we can keep accumulating the sum (profit) with start index 0 and ignore 1 and 2, since this problem's target is the MAX, right?
The line value = [0, value].max returns value unchanged as long as it is positive, so value keeps moving on.
In case the sum above (the profit) is positive at step 1 but negative at step 2: (p[1] - p[0]) + (p[2] - p[1]) < 0, so p[2] - p[0] < 0 while p[1] - p[0] > 0; that means p[2] < p[0] and p[0] < p[1].
So we have p[2] < p[0] < p[1]. Obviously, for each price p[j] after date 2 (j > 2), p[j] - p[2] is always larger than p[j] - p[0] and p[j] - p[1], so we can ignore p[0] and p[1], again because this problem's target is the MAX, right?
That is why we can reset value to zero and start counting the profit again
with start index 2: the value computed by value += (prices[i] - prices[i-1]) in the next loop iteration is then really p[3] - p[2], not p[3] - p[2] + p[2] - p[1] + ...; remember that we have already cached the maximum profit of the range [0..2] in profit.
That is what the line value = [0, value].max does: it resets value to 0 whenever value drops below 0.
[7, 1, 5, 3, 6, 4]
    ^ the running sum turns negative here (1 - 7 = -6), which tells you that the profit
      between any of [5, 3, 6, 4] and [7] is always smaller than the profit measured
      from [1], so we reset and restart from date 1
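To make the one-pass idea concrete, here is a small sketch of the same approach in Python (an illustration only, not the original Ruby; the variable names mirror the Ruby version):

def max_profit(prices):
    value = 0    # best profit of a window ending at the current day
    profit = 0   # best profit seen so far
    for i in range(1, len(prices)):
        value += prices[i] - prices[i - 1]  # extend the running sum of differences
        value = max(0, value)               # reset once the running sum goes negative
        profit = max(profit, value)         # cache the best profit so far
    return profit

print(max_profit([7, 1, 5, 3, 6, 4]))  # 5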
good solution!
A good way to understand a program is to simply step through it with pen and paper, while keeping track of all the (relevant) state. So, let's do just that.
In our case, the relevant state consists of the two local variables profit and value, the iteration variable i, and by extension prices[i] and prices[i-1]. We start out with profit and value both initialized to 0 in lines 1 and 2 of the method. Then, lines 4–8 are the loop which does all the actual work; line 10 simply returns the result. It is easy to see that lines 5–7 are the ones doing all the work, so let's focus on those:
| i | line | code | what does it do? | profit | value | prices[i] | prices[i-1] |
|---|------|------|------------------|--------|-------|-----------|-------------|
| 1 | 5 | value += (prices[i] - prices[i-1]) | add the price difference to the current value | 0 | 0 - 6 = -6 | 1 | 7 |
| 1 | 6 | value = [0, value].max | set value to the maximum of 0 and its current value, IOW set value to 0 if it is negative | 0 | 0 | 1 | 7 |
| 1 | 7 | profit = value if value > profit | set profit to the maximum of its current value and value | 0 | 0 | 1 | 7 |
| 2 | 5 | value += (prices[i] - prices[i-1]) | add the price difference to the current value | 0 | 0 + 4 = 4 | 5 | 1 |
| 2 | 6 | value = [0, value].max | set value to the maximum of 0 and its current value, IOW set value to 0 if it is negative | 0 | 4 | 5 | 1 |
| 2 | 7 | profit = value if value > profit | set profit to the maximum of its current value and value | 4 | 4 | 5 | 1 |
| 3 | 5 | value += (prices[i] - prices[i-1]) | add the price difference to the current value | 4 | 4 - 2 = 2 | 3 | 5 |
| 3 | 6 | value = [0, value].max | set value to the maximum of 0 and its current value, IOW set value to 0 if it is negative | 4 | 2 | 3 | 5 |
| 3 | 7 | profit = value if value > profit | set profit to the maximum of its current value and value | 4 | 2 | 3 | 5 |
| 4 | 5 | value += (prices[i] - prices[i-1]) | add the price difference to the current value | 4 | 2 + 3 = 5 | 6 | 3 |
| 4 | 6 | value = [0, value].max | set value to the maximum of 0 and its current value, IOW set value to 0 if it is negative | 4 | 5 | 6 | 3 |
| 4 | 7 | profit = value if value > profit | set profit to the maximum of its current value and value | 5 | 5 | 6 | 3 |
| 5 | 5 | value += (prices[i] - prices[i-1]) | add the price difference to the current value | 5 | 5 - 2 = 3 | 4 | 6 |
| 5 | 6 | value = [0, value].max | set value to the maximum of 0 and its current value, IOW set value to 0 if it is negative | 5 | 3 | 4 | 6 |
| 5 | 7 | profit = value if value > profit | set profit to the maximum of its current value and value | 5 | 3 | 4 | 6 |
What actually happens when I do value = [0, value].max?
The easiest way to find out what a method does, is to read its documentation [note, technically, this method is not Enumerable#max but Array#max, but they behave the same in this example]:
Returns the object in enum with the maximum value.
So, in other words, the method does what its name suggests: it returns the maximum of the Enumerable.
Note that this is not a very idiomatic way to write the code. For example, it makes no sense to use max in line 6 and if in line 7 to do the same thing: stick to one or the other, do not confuse the reader by using two different things to do the same thing. Also manually iterating over a collection is practically never done in Ruby. This would be much better expressed using Enumerable#each_cons and e.g. Enumerable#max_by or Enumerable#reduce. Actually, the perfect solution would be to use a prefix sum aka scan, but unfortunately, that is not available in Ruby's core and standard libraries.
An idiomatic version would look more like this:
def max_profit(prices)
  prices.
    each_cons(2).
    reduce([0, 0]) do |(value, profit), (a, b)|
      [[temp = value + b - a, 0].max, [temp, profit].max]
    end.
    last
end
or this:
def max_profit(prices)
  prices.
    each_cons(2).
    map {|a, b| b - a }.
    reduce([]) do |res, difference|
      res << [(res[-1] || 0) + difference, 0].max
    end.
    max
end
Related
I came across a question and was unable to find a feasible solution.
Image Quantization
Given a grayscale image where each pixel's color ranges from 0 to 255, compress the range of values to a given number of quantum values.
The goal is to do that with the minimum sum of costs, where the cost of a pixel is defined as the absolute difference between its color and the closest quantum value.
Example
There are 3 rows and 3 columns, image = [[7,2,8], [8,2,3], [9,8,255]], quantums = 3 (the number of quantum values). The optimal quantum values are (2, 8, 255), leading to the minimum sum of costs |7-8| + |2-2| + |8-8| + |8-8| + |2-2| + |3-2| + |9-8| + |8-8| + |255-255| = 1+0+0+0+0+1+1+0+0 = 3
Function description
Complete the solve function provided in the editor. This function takes the following 4 parameters and returns the minimum sum of costs.
n Represents the number of rows in the image
m Represents the number of columns in the image
image Represents the image
quantums Represents the number of quantum values.
Output:
Print a single integer, the minimum sum of costs.
Constraints:
1 <= n, m <= 100
0 <= image[i][j] <= 255
1 <= quantums <= 256
Sample Input 1
3
3
7 2 8
8 2 3
9 8 255
10
Sample output 1
0
Explanation
The optimum quantum values are {0,1,2,3,4,5,7,8,9,255}, leading to the minimum sum of costs |7-7| + |2-2| + |8-8| + |8-8| + |2-2| + |3-3| + |9-9| + |8-8| + |255-255| = 0+0+0+0+0+0+0+0+0 = 0
Can anyone help me reach the solution?
Clearly, if we have at least as many quantums available as distinct pixel values, we can return 0, since we can set a quantum equal to each distinct pixel value. Now consider setting the quantum at the lowest number of the sorted, grouped list.
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
[(2, 2), (3, 1), (7, 1), (8, 3), (9, 1), (255, 1)]
Set the single quantum at the lowest number, 2. We record the required sum of differences:
0 + 0 + 1 + 5 + 6 + 6 + 6 + 7 + 253 = 284
Now, to update when incrementing the quantum by 1, we observe that each affected element moves by 1, so all we need is the count of affected elements on either side.
Increment 2 to 3:
quantum = 3
1 + 1 + 0 + 4 + 5 + 5 + 5 + 6 + 252 = 279
or
284 + 2 * 1 - 7 * 1
= 284 + 2 - 7
= 279
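A tiny Python check of that update rule (the pixel values and quantum positions are just the ones from the example above; this is only an illustration):

pixels = sorted([7, 2, 8, 8, 2, 3, 9, 8, 255])

def cost(quantum):
    # total cost when a single quantum value serves every pixel
    return sum(abs(p - quantum) for p in pixels)

old = cost(2)                                    # 284
moved_away = sum(1 for p in pixels if p <= 2)    # 2 pixels get 1 farther
moved_closer = sum(1 for p in pixels if p >= 3)  # 7 pixels get 1 closer
print(old, old + moved_away - moved_closer, cost(3))  # 284 279 279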
Consider traversing from the left with a single quantum, calculating only the effect on pixels in the sorted, grouped list that are on the left side of the quantum value.
To only update the left side when adding a quantum, we have:
left[k][q] = min over p < q of ( left[k-1][p] + effect(A, p, q) )
where effect(A, p, q) is the cost change for the elements of A (the sorted, grouped list) in the range [p, q], assigning each element to whichever of p or q is closer, computed incrementally as we decrease p. As we increase q for each round of k, we can keep our place in the sorted, grouped pixel list with a pointer that moves incrementally.
If we have a solution for left[k][q], the best cost for the pixels on the left side of q when k quantums are placed and the rightmost quantum is the value q, then the complete candidate solution is given by:
left[k][q] + effect(A, q, list_end)
where there is no quantum between q and list_end.
Time complexity would be O(n + k * 256 * 256), i.e. O(n + quantums * 256^2), where n is the number of elements in the input matrix.
Python code:
def f(M, quantums):
    pixel_freq = [0] * 256
    for row in M:
        for colour in row:
            pixel_freq[colour] += 1

    # dp[k][q] stores the best solution up
    # to the qth quantum value, with
    # considering the effect left of
    # k quantums with the rightmost as q
    dp = [[0] * 256 for _ in range(quantums + 1)]

    pixel_count = pixel_freq[0]
    for q in range(1, 256):
        dp[1][q] = dp[1][q-1] + pixel_count
        pixel_count += pixel_freq[q]

    predecessor = [[None] * 256 for _ in range(quantums + 1)]

    # Main iteration, where the full
    # candidate includes both right and
    # left effects while incrementing the
    # number of quantums.
    for k in range(2, quantums + 1):
        for q in range(k - 1, 256):
            # Adding a quantum to the right
            # of the rightmost doesn't change
            # the left cost already calculated
            # for the rightmost.
            best_left = dp[k-1][q-1]
            predecessor[k][q] = q - 1

            q_effect = 0
            p_effect = 0
            p_count = 0

            for p in range(q - 2, k - 3, -1):
                r_idx = p + (q - p) // 2
                # When the distance between p
                # and q is even, we reassign
                # one pixel frequency to q
                if (q - p - 1) % 2 == 0:
                    r_freq = pixel_freq[r_idx + 1]
                    q_effect += (q - r_idx - 1) * r_freq
                    p_count -= r_freq
                    p_effect -= r_freq * (r_idx - p)
                # Either way, we add one pixel frequency
                # to p_count and recalculate
                p_count += pixel_freq[p + 1]
                p_effect += p_count

                effect = dp[k-1][p] + p_effect + q_effect
                if effect < best_left:
                    best_left = effect
                    predecessor[k][q] = p

            dp[k][q] = best_left

    # Records the cost only on the right
    # of the rightmost quantum
    # for candidate solutions.
    right_side_effect = 0
    pixel_count = pixel_freq[255]
    best = dp[quantums][255]
    best_quantum = 255

    for q in range(254, quantums - 1, -1):
        right_side_effect += pixel_count
        pixel_count += pixel_freq[q]
        candidate = dp[quantums][q] + right_side_effect
        if candidate < best:
            best = candidate
            best_quantum = q

    quantum_list = [best_quantum]
    prev_quantum = best_quantum
    for i in range(quantums, 1, -1):
        prev_quantum = predecessor[i][prev_quantum]
        quantum_list.append(prev_quantum)

    return best, list(reversed(quantum_list))
Output:
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
k = 3
print(f(M, k)) # (3, [2, 8, 255])
M = [
[7, 2, 8],
[8, 2, 3],
[9, 8, 255]
]
k = 10
print(f(M, k)) # (0, [2, 3, 7, 8, 9, 251, 252, 253, 254, 255])
I would propose the following:
step 0
Input is:
image = 7 2 8
8 2 3
9 8 255
quantums = 3
step 1
Then you can calculate a histogram from the input image. Since the image is grayscale, it can contain only values from 0 to 255.
That means the histogram array has length 256.
hist = int[256]                  // init the histogram array
for each pixel color in image    // iterate over the image
    hist[color]++                // and increment the histogram values
hist:
value 0 0 2 1 0 0 0 1 2 1 0 . . . 1
---------------------------------------------
color 0 1 2 3 4 5 6 7 8 9 10 . . . 255
How to read the histogram:
color 3 has 1 occurrence
color 8 has 2 occurrences
With this approach, we have reduced our problem size from N (the number of pixels) to 256 (the histogram size).
Time complexity of this step is O(N)
step 2
Once we have the histogram in place, we can calculate its local maximums and keep the quantums largest ones. In our case, we keep 3 local maximums.
For the sake of simplicity, I will not write the pseudo code; there are numerous examples on the internet. Just google 'find local maximum/extrema in array'.
It is important that you end up with the 3 biggest local maximums. In our case they are:
hist:
value 0 0 2 1 0 0 0 1 2 1 0 . . . 1
---------------------------------------------
color 0 1 2 3 4 5 6 7 8 9 10 . . . 255
^ ^ ^
These values (2, 8, 255) are your tops of the mountains.
Time complexity of this step is O(quantums)
I could explain why it is not O(1) or O(256), since you can find local maximums in a single pass. If needed I will add a comment.
step 3
Once you have your tops of the mountains, you want to isolate each mountain so that it covers the maximum possible surface.
You do that by finding the minimum value between two neighbouring tops.
In our case it is:
value 0 0 2 1 0 0 0 1 2 1 0 . . . 1
---------------------------------------------
color 0 1 2 3 4 5 6 7 8 9 10 . . . 255
(Picture the histogram as three mountains: one peak around color 2, one around 8, and one at 255, separated by valleys of zeros.)
So our goal is to find the minimum between these index values:
from 0 to 2 (not needed, the first mountain starts at the beginning)
from 2 to 8 (to see where the first mountain ends and the second one starts)
from 8 to 255 (to see where the second one ends and the third starts)
from 255 to the end (also not needed, the last mountain always reaches the end)
There are multiple candidates (multiple zeros), and it is not important which one you choose as the minimum; the final surface of each mountain is the same.
Let's say our algorithm returns two minimums. We will use them in the next step.
min_1_2 = 6
min_2_3 = 254
Time complexity of this step is O(256). You need just a single pass over the histogram to calculate all the minimums (you actually do multiple smaller iterations, but in total you visit each element only once).
Some would consider this O(1).
Step 4
Calculate the median of each mountain.
This can be the tricky one. Why? Because we want to calculate the median using the original values (colors), not the counters (occurrences).
There is also a formula that gives a good estimate, and it can be computed quite fast, looking only at the histogram values (https://medium.com/analytics-vidhya/descriptive-statistics-iii-c36ecb06a9ae).
If that is not precise enough, the only option is to "unwrap" the histogram back into raw pixel values; then we can sort those values and easily find the median.
In our case, those medians are 2, 8, 255
Time complexity of this step is O(n log n) if we have to sort the whole original image. If the approximation works well enough, this step is close to constant time.
step 5
This is final step.
You now know the start and end of the "mountain".
You also know the median that belongs to that "mountain"
Again, you can iterate over each mountain and calculate the DIFF.
diff = 0
median_1 = 2
median_2 = 8
median_3 = 255
for each hist value (color, count) between START and END    // for the first mountain  -> START = 0,   END = 6
                                                             // for the second mountain -> START = 6,   END = 254
                                                             // for the third mountain  -> START = 254, END = 255
    diff = diff + |color - median_X| * count
Time complexity of this step is again O(256), and it can be considered as constant time O(1)
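As a rough illustration of steps 1 and 5 only (the mountain boundaries and medians are hard-coded from the example above; the names histogram/total_diff are mine, not from the answer):

def histogram(image):
    # step 1: count occurrences of each gray value 0..255
    hist = [0] * 256
    for row in image:
        for color in row:
            hist[color] += 1
    return hist

def total_diff(hist, mountains):
    # step 5: mountains is a list of (start, end, median) triples;
    # every color in [start, end) is charged against that mountain's median
    diff = 0
    for start, end, median in mountains:
        for color in range(start, end):
            diff += abs(color - median) * hist[color]
    return diff

image = [[7, 2, 8], [8, 2, 3], [9, 8, 255]]
hist = histogram(image)
# boundaries 6 and 254 and medians 2, 8, 255 taken from the steps above
print(total_diff(hist, [(0, 6, 2), (6, 254, 8), (254, 256, 255)]))  # 3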
I was looking at the code for Counting Sort on GeeksForGeeks and during the final stage of the algorithm where the elements from the original array are inserted into their final locations in the sorted array (the second-to-last for loop), the input array is traversed in reverse order.
I can't seem to understand why you can't just go from the beginning of the input array to the end, like so :
for i in range(len(arr)):
    output_arr[count_arr[arr[i] - min_element] - 1] = arr[i]
    count_arr[arr[i] - min_element] -= 1
Is there some subtle reason for going in reverse order that I'm missing? Apologies if this is a very obvious question. I saw Counting Sort implemented in the same style here as well.
Any comments would be helpful, thank you!
Stability. With your way, the order of equal-valued elements gets reversed instead of preserved. Going over the input backwards cancels out the backwards copying (that -= 1 thing).
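To see the stability difference, here is a small illustrative Python sketch (not from the linked articles) that counting-sorts (key, tag) pairs by key; the backward pass keeps equal keys in their original order, while the forward pass with the same decrement reverses them:

def counting_sort_pairs(pairs, backwards):
    # keys are assumed to be in 0..9 for brevity
    count = [0] * 10
    for key, _ in pairs:
        count[key] += 1
    for k in range(1, 10):
        count[k] += count[k - 1]   # count[k] = one past the last slot for key k
    output = [None] * len(pairs)
    order = reversed(pairs) if backwards else pairs
    for key, tag in order:
        count[key] -= 1
        output[count[key]] = (key, tag)
    return output

pairs = [(3, 'a'), (1, 'x'), (3, 'b'), (3, 'c')]
print(counting_sort_pairs(pairs, backwards=True))   # [(1, 'x'), (3, 'a'), (3, 'b'), (3, 'c')] - stable
print(counting_sort_pairs(pairs, backwards=False))  # [(1, 'x'), (3, 'c'), (3, 'b'), (3, 'a')] - equal keys reversed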
To process an array in forward order, the count / index array either needs to be one element larger so that the starting index is 0 or two local variables can be used. Example for integer array:
def countSort(arr):
    output = [0 for i in range(len(arr))]
    count = [0 for i in range(257)]        # change
    for i in arr:
        count[i+1] += 1                    # change
    for i in range(256):
        count[i+1] += count[i]             # change
    for i in range(len(arr)):
        output[count[arr[i]]] = arr[i]     # change
        count[arr[i]] += 1                 # change
    return output
arr = [4,3,0,1,3,7,0,2,6,3,5]
ans = countSort(arr)
print(ans)
or using two variables, s to hold the running sum, c to hold the current count:
def countSort(arr):
    output = [0 for i in range(len(arr))]
    count = [0 for i in range(256)]
    for i in arr:
        count[i] += 1
    s = 0
    for i in range(256):
        c = count[i]
        count[i] = s
        s = s + c
    for i in range(len(arr)):
        output[count[arr[i]]] = arr[i]
        count[arr[i]] += 1
    return output
arr = [4,3,0,1,3,7,0,2,6,3,5]
ans = countSort(arr)
print(ans)
Here we are considering a stable sort, i.e. one that preserves the relative order of equal elements.
For example, if we have an array like
arr:    5  8  3  1  1  2  6

        0  1  2  3  4  5  6  7  8
count:  0  2  1  1  0  1  1  0  1
Now we take the cumulative sum of all the frequencies:
        0  1  2  3  4  5  6  7  8
count:  0  2  3  4  4  5  6  6  7
When traversing the original array, we prefer to go from the end: because we subtract from the cumulative count before placing each element, the last occurrence of a value lands in the last slot reserved for that value, so equal elements keep their original relative order.
But if we start traversing from the beginning, the cumulative sum loses its purpose, since elements are no longer placed according to their original positions; we would be placing them haphazardly, which we could do even without taking the cumulative sum.
I am solving this problem - COLCOIN - Collecting Coins on spoj.
link- https://www.spoj.com/problems/COLCOIN/
where, for a given set of denominations and a requested amount of money, the bank gives you coins of the highest denomination until it can't any more, and then moves on to the next highest denomination. Example: if the denominations are [1,2,3,4,8] and you request 23 rupees, it first gives you two 8-rupee coins, and as it can't give any more 8-rupee coins, it moves to the next denomination and gives you one 4-rupee and one 3-rupee coin.
The problem is to find the maximum number of distinct denominations you can get, given the set of denominations. The amount of money you request from the bank is a free variable; it shouldn't really come into the picture, if I am correct.
this is my idea:
try to sum up the values of the lower denominations and see whether they add up to a bigger denomination; if they do, you'll never get all the smaller denominations.
ex: let's say the denominations are 1, 2 and 5. 1 + 2 < 5, so you can get all denominations: for 8 = 5 + 2 + 1.
another: let's say the denominations are 3, 4 and 5. 3 + 4 > 5, so we can never get all the denominations, because money is handed out in coins of 5 until the amount still owed is less than 5, and obviously you can't get 3 + 4 = 7 rupees out of something less than 5.
One other idea, which is obviously wrong, is to start with the 2nd highest denomination, find the coins that add up to just under it, and return that solution + 1 (for the highest denomination).
It is not correct because, for example, with [1,2,4,17,19]: if we count 19 already and try to sum up the others to 18, we get 17 + 1, only 2 denominations besides 19, whereas requesting 26 would have given 4 denominations: 19 + 4 + 2 + 1.
I think you can use the following approach:
Start with the lowest denomination.
Check whether adding the next lowest denomination would reach the denomination after it.
If the sum stays smaller, add that denomination to the sum.
Otherwise skip it, and check whether the sum plus the denomination one step further stays below the denomination after that one.
Example: 1 3 6 8 15 20
different denominations d = 1, sum = 1
1 + 3 < 6: d = 2, sum = 4
4 + 6 >= 8: d = 2, sum = 4
4 + 8 < 15: d = 3, sum = 12
12 + 15 >= 20: d = 3, sum = 12
12 + 20 < infinity: d = 4, sum = 32
=> answer is 4 (and the amount to withdraw is 32).
Implementation:
// expects the denominations to be ordered from smallest to largest
// and also expects them to be unique
function findMaxDenominationsInSingleWithdrawal(denominations) {
    if (denominations.length <= 2)
        return denominations.length
    let sum = denominations[0], d = 1
    for (let index = 1; index + 1 < denominations.length; index++) {
        if (sum + denominations[index] < denominations[index + 1]) {
            d++
            sum += denominations[index]
        }
    }
    return d + 1
}
console.log(findMaxDenominationsInSingleWithdrawal([1, 3, 6, 8, 15, 20]))
We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.
Count the ways for one person to get less than x, double it, and subtract that from double the total number of ways to divide the collection in two (the Stirling number of the second kind, {n, 2}).
For example,
{2, 3, 3, 5}, x = 5
i   matrix
0   2: 1
1   3: 1 (adding to 2 is too much)
2   3: 2
3   N/A (>= x)

3 ways for one person to get less than 5.
Total ways to partition a set of 4 items in 2 is {4, 2} = 7.
2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n, k):
    n1 = n
    k1 = k
    if n <= 0:
        return 1
    elif k <= 0:
        return 0
    elif (n == 0 and k == 0):
        return -1
    elif n != 0 and n == k:
        return 1
    elif n < k:
        return 0
    else:
        temp1 = stirling(n1 - 1, k1)
        temp1 = k1 * temp1
        return (k1 * (stirling(n1 - 1, k1))) + stirling(n1 - 1, k1 - 1)

def f(coins, x):
    a = [1] + (x - 1) * [0]
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    for c in coins:
        for i in range(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print(f([2,3,3,5], 5))   # 8
print(f([1,2,3,4,4], 5)) # 16
If sum of all coins is S, then the first person can get x..S-x of money.
Make array A of length S-x+1 and fill it with numbers of variants of changing A[i] with given coins (like kind of Coin Change problem).
To provide uniqueness (don't count C1+C2 and C2+C1 as two variants), fill array in reverse direction
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]   // we can compose value i as (i - C) and C
then sum the A entries in the range x..S-x
Example for coins 2, 3, 3, 5 and x=5.
S = 13, S-x = 8
Array state after using coins in order:
idx:       0  1  2  3  4  5  6  7  8
after 2:   1  .  1  .  .  .  .  .  .
after 3:   1  .  1  1  .  1  .  .  .
after 3':  1  .  1  2  .  2  1  .  1
after 5:   1  .  1  2  .  3  1  1  3
So there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3):
2 3    | 3' 5
2 3'   | 3 5
2 3 3' | 5
2 5    | 3 3'
3 3'   | 2 5
3 5    | 2 3'
3' 5   | 2 3
5      | 2 3 3'
You can also solve it in O(A * x^2) time and memory by adding memoization to this dp:
solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))

print(solve(A, 0, 0, 0))
So depending on whether x^2 < sum or not, you could use this or the answer provided by @MBo (in terms of time complexity). If you care more about space, this one is better only when A * x^2 < sum - x.
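For reference, a minimal Python sketch of that memoized recursion (the wrapper name and the test case are mine; the earlier example {2, 3, 3, 5} with x = 5 is reused only as a sanity check):

from functools import lru_cache

def count_distributions(coins, x):
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def solve(pos, sum1, sum2):
        # both sums are capped at x, so the state space is O(len(coins) * x^2)
        if pos == len(coins):
            return 1 if (sum1 == x and sum2 == x) else 0
        c = coins[pos]
        return (solve(pos + 1, min(sum1 + c, x), sum2) +
                solve(pos + 1, sum1, min(sum2 + c, x)))

    return solve(0, 0, 0)

print(count_distributions([2, 3, 3, 5], 5))  # 8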
The problem is to find the contiguous subarray within an array (containing at least one number) which has the largest product.
For example, given the array [2,3,-2,4],
the contiguous subarray [2,3] has the largest product 6.
Why does the following work? Can anyone provide any insight on how to prove its correctness?
if (nums == null || nums.Length == 0)
{
    throw new ArgumentException("Invalid input");
}

int max = nums[0];
int min = nums[0];
int result = nums[0];

for (int i = 1; i < nums.Length; i++)
{
    int prev_max = max;
    int prev_min = min;
    max = Math.Max(nums[i], Math.Max(prev_max*nums[i], prev_min*nums[i]));
    min = Math.Min(nums[i], Math.Min(prev_max*nums[i], prev_min*nums[i]));
    result = Math.Max(result, max);
}
return result;
Start from the logic side to understand how to solve the problem. There are two relevant traits of each subarray to consider:
If it contains a 0, the product of the subarray is 0 as well.
If the subarray contains an odd number of negative values, its total product is negative as well; otherwise it is positive (or 0, counting 0 as a positive value).
Now we can start off with the algorithm itself:
Rule 1: zeros
Since a 0 zeroes out the product of the subarray, the subarray of the solution mustn't contain a 0, unless the input contains only negative values and 0. This is achieved quite simply, since max and min are both reset to 0 as soon as a 0 is encountered in the array:
max = Math.Max(0 , Math.Max(prev_max * 0 , prev_min * 0));
min = Math.Min(0 , Math.Min(prev_max * 0 , prev_min * 0));
will logically evaluate to 0, no matter what the input so far was.
arr: 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 0
result: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
min: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
max: 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 0
//non-zero values don't matter for Rule 1, so I just used 1
Rule 2: negative numbers
With Rule 1, we've already implicitly split the array into subarrays, such that each subarray consists of either a single 0 or multiple non-zero values. Now the task is to find the largest possible product inside such a subarray (I'll refer to it simply as the array from here on).
If the number of negative values in the array is even, the problem becomes pretty trivial: just multiply all values in the array and the result is the maximum product of the array. For an odd number of negative values there are two possible cases:
The array contains only a single negative value: in that case either the subarray of all values with a smaller index than the negative value, or the subarray of all values with a larger index, is the subarray with the maximum product.
The array contains at least 3 negative values: in that case we have to eliminate either the first negative number and all of its predecessors, or the last negative number and all of its successors.
Now let's have a look at the code:
max = Math.Max(nums[i] , Math.Max(prev_max * nums[i] , prev_min * nums[i]));
min = Math.Min(nums[i] , Math.Min(prev_max * nums[i] , prev_min * nums[i]));
Case 1: the evaluation of min is actually irrelevant, since the sign of the product of the array will only flip once, at the negative value. As soon as the negative number (= nums[i]) is encountered, max will be nums[i], since both max and min are at least 1 and thus multiplication with nums[i] results in a number <= nums[i]. And for the first number after the negative number, nums[i + 1], max will be nums[i + 1] again. Since the maximum found so far is made persistent in result (result = Math.Max(result, max);) after each step, this automatically yields the correct result for that array.
arr: 2 3 2 -4 4 5
result: 2 6 12 12 12 20
max: 2 6 12 -4 4 20
//Omitted min, since it's irrelevant here.
Case 2: Here min becomes relevant too. Before we encounter the first negative value, min is the smallest product of a subarray ending at the current position. When we encounter the first negative element of the array, that product turns negative. We continue to build both products (min and max), swapping their roles each time a negative value is encountered, and we keep updating result. When the last negative value of the array is encountered, result holds the value of the subarray that eliminates the last negative value and its successors. After the last negative value, max is the product of the subarray that eliminates the first negative value and its predecessors, and min becomes irrelevant. Now we simply continue to multiply max with the remaining values in the array and update result until the end of the array is reached.
arr:    2 3  -4   3  -2   5    -6    3
result: 2 6   6   6 144 720   720  720
min:    2 3 -24 -72  -6 -30 -4320  ...
max:    2 6  -4   3 144 720   180  540
//min becomes irrelevant after the last negative value
Putting the pieces together
Since min and max are reset every time we encounter a 0, we can easily reuse them for each subarray that doesn't contain a 0. Thus Rule 1 is applied implicitly without interfering with Rule 2. Since result isn't reset each time a new subarray is inspected, the value will be kept persistent over all runs. Thus this algorithm works.
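To see the whole mechanism in one place, here is a compact Python sketch of the same max/min bookkeeping (an illustration, not the poster's C#):

def max_product(nums):
    # max_p / min_p are the largest and smallest products of a subarray ending here
    max_p = min_p = result = nums[0]
    for x in nums[1:]:
        candidates = (x, max_p * x, min_p * x)
        max_p, min_p = max(candidates), min(candidates)
        result = max(result, max_p)
    return result

print(max_product([2, 3, -2, 4]))                # 6
print(max_product([2, 3, -4, 3, -2, 5, -6, 3]))  # 720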
Hope this is understandable (to be honest, I doubt it and will try to improve the answer if any questions appear). Sorry for the monstrous answer.
Let's assume the contiguous subarray which produces the maximal product is a[i], a[i+1], ..., a[j]. Since it is the subarray with the largest product, it is also the suffix of a[0], a[1], ..., a[j] that produces the largest product.
The idea of your given algorithm is the following: for every prefix array a[0], ..., a[j], find the suffix with the largest product. Out of all these best suffixes, take the maximal one.
At the beginning, the smallest and biggest suffix products are simply nums[0]. Then it iterates over all the other numbers in the array. The largest suffix product is always built in one of three ways: it is just the last number nums[i]; or it is the largest suffix product of the shortened list multiplied by the last number (if nums[i] > 0); or it is the smallest (< 0) suffix product multiplied by the last number (if nums[i] < 0). (*)
Using the helper variable result, you store the maximal such suffix product found so far.
(*) This fact is quite easy to prove. Suppose otherwise, i.e. there exists a different suffix product that produces a bigger number; then together with the last number nums[i] it would create an even bigger suffix product, which is a contradiction.