Minimize total area using K rectangles in less than O(N^4) - algorithm

Given an increasing sequence of N numbers (each at most T), we can use at most K rectangles (placed starting at position 0) such that, for the i-th value v in the sequence, there exists a rectangle in positions [v, T) with height at least i + 1.
The total area of the rectangles should be the minimum that satisfies the condition above.
Example: given the sequence [0, 3, 4], T = 5 and K = 2 we can use:
a rectangle from 0 to 2 with height 1 (thus having an area of 3)
a rectangle from 3 to 4 with height 3 (thus having an area of 6).
Using at most 2 rectangles, we cannot get a total area smaller than 9.
This problem can be solved using DP.
int dp[MAXK+1][MAXN][MAXN]; // initialised to -1
int seq[MAXN];

int filldp(int cur_idx, int cur_value, int cur_K) {
    int res = dp[cur_K][cur_idx][cur_value];
    if (res != -1) return res;
    res = INF;
    if (cur_idx == N - 1 && cur_value >= N)
        // Last value already covered: pay for the final rectangle up to T
        res = min(res, (T - seq[cur_idx]) * cur_value);
    else {
        if (cur_idx < N - 1 && cur_value >= cur_idx + 1) {
            // Keep the current height up to the next value in the sequence
            int cur_cost = (seq[cur_idx + 1] - seq[cur_idx]) * cur_value;
            res = min(res, cur_cost + filldp(cur_idx + 1, cur_value, cur_K));
        }
        // Try every possible height for a new rectangle
        if (cur_K < K)
            for (int new_value = cur_value + 1; new_value <= N; new_value++)
                res = min(res, filldp(cur_idx, new_value, cur_K + 1));
    }
    dp[cur_K][cur_idx][cur_value] = res;
    return res;
}
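The question doesn't show the entry point; the following is a direct Python port of the DP above (my own sketch, assuming the intended initial call is filldp(0, 0, 0) on a dp table cleared to -1). It reproduces the example's answer of 9.

import functools, math

def min_total_area(seq, T, K):
    # direct port of the C++ DP above; lru_cache plays the role of the dp array
    N = len(seq)

    @functools.lru_cache(maxsize=None)
    def filldp(cur_idx, cur_value, cur_k):
        res = math.inf
        if cur_idx == N - 1 and cur_value >= N:
            res = (T - seq[cur_idx]) * cur_value
        else:
            if cur_idx < N - 1 and cur_value >= cur_idx + 1:
                cost = (seq[cur_idx + 1] - seq[cur_idx]) * cur_value
                res = min(res, cost + filldp(cur_idx + 1, cur_value, cur_k))
            if cur_k < K:  # try every taller rectangle starting at this value
                for new_value in range(cur_value + 1, N + 1):
                    res = min(res, filldp(cur_idx, new_value, cur_k + 1))
        return res

    return filldp(0, 0, 0)  # assumed entry point

print(min_total_area([0, 3, 4], 5, 2))  # 9, as in the example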
Unsurprisingly, this DP approach is not really fast, probably because of the for loop. However, as far as I can tell, this code should not make more than MAXK * MAXN * MAXN significant calls (i.e., no more than one per cell of dp). MAXK and MAXN are both 200, so dp has 8 million cells, which is not too much.
Am I missing anything?
UPDATE: As pointed out by Saeed Amiri (thank you!), the code makes N^2 * K significant calls, but each one does O(N) work. The whole algorithm is therefore O(N^3 * K) = O(N^4).
Can we do better?

Max Product of a string that requires K multiplication operators to be inserted

Maximum Product.
The input to the problem is a string Z = z1 z2 ... zn, where each zi is a digit between 1 and 9, and an integer k where 0 <= k < n.
An example string is Z = 8473817, which is of length n = 7. We want to insert k multiplication operators X into the string so that the mathematical result of the expression is the largest possible. There are n - 1 possible locations for the operators, namely after the i-th character, where i = 1, ..., n - 1.
For example, for input Z = 21322 and k = 2, one possible way to insert the X operators is 2 X 1 X 322 = 644; another possibility is 21 X 3 X 22 = 1386.
Design a dynamic programming algorithm to output the maximum product obtainable from inserting exactly k multiplication operators X into the string.
You can assume that all the multiplication operations in your algorithm take
O(1) time.
I am approaching this using the Matrix Chain Multiplication method, where you compute smaller subproblems along the upper diagonal.
This works when k = 1, i.e. one multiplication operator is inserted.
In the picture below, I have used 8473817 as an example and shown that 8473 X 817 yields the highest product.
How do I scale this solution for k > 1 and k < n?
Update: adding pseudo code.
let A(i,j) store the max product for the substring Z(i...j), 1 <= i <= j <= n
for i = 1 -> n:
    A(i,i) = Z(i)
for s = 1 -> n-1:
    for i = 1 -> n-s:
        j = i + s
        A(i,j) = 0
        for l = i -> j-1:
            A(i,j) = max(A(i,j), A(i,l) * A(l+1,j))
return A(1,n)
The above code works when k = 1. How do I scale this up when k > 1 and less than n?
Update
Based on trincot's solution, I revamped the solution to not use memoization.
Subproblem definition
Let T(i) store the start offset where inserting the i-th X operator in Z yields the max value, for 1 <= i <= k.
Pseudo code
T(0) = 0
for i = 1 -> k:
    max = 0
    for j = T(i-1) + 1 -> n:
        result = Z[1..j] * Z[j+1..n]
        if result > max
            max = result
            T(i) = j
val = 1
for i = 1 -> k:
    val = val * Z[T(i-1)+1...T(i)]
val = val * Z[T(k)+1..n]
Your pseudo code is a dynamic programming solution where you use memoization for every possible slice of z (2 dimensions, starting and ending offset). However, you would only need to memoize the best result for any suffix of z, so you would only need one (starting) offset. A second dimension in your memoization would then be used for the value of k (the number of remaining multiplications).
So you would still need a 2-dimensional table for memoization, but one index would be for k and the other for an offset in z.
Here is an implementation in JavaScript:
function solve(z, k) {
    // Initialise a (k+1) x (l+1) array (where l is the length of z), filled with zeroes.
    const memo = Array.from({length: k + 1}, () => Array(z.length + 1).fill(0));

    function recur(z, k) {
        if (k == 0) return z;
        let result = memo[k][z.length];
        if (result == 0) {
            for (let i = 1; i <= z.length - k; i++) {
                result = Math.max(result, +z.slice(0, i) * recur(z.slice(i), k - 1));
            }
            memo[k][z.length] = result;
        }
        return result;
    }

    return recur(z, k);
}

// A few example runs:
console.log(solve('8473817', 1)); // 6922441
console.log(solve('21322', 2)); // 1386
console.log(solve('191111', 2)); // 10101
Bottom up
The same can be done in an iterative algorithm -- bottom-up instead of top-down. Here we can save one dimension of the memoization array, as the same array can be re-used for the next value of k as it increases from 0 to its final value:
function solve(z, k) {
    const memo = Array(z.length);
    // Initialise for k=0:
    // the best product in a suffix is the suffix itself
    for (let i = 0; i < z.length; i++) {
        memo[i] = +z.slice(i);
    }
    for (let kk = 1; kk <= k; kk++) {
        for (let i = 0; i < z.length - kk; i++) {
            // find best position for multiplication
            let result = 0;
            for (let j = i + 1; j < z.length - kk + 1; j++) {
                result = Math.max(result, +z.slice(i, j) * memo[j]);
            }
            memo[i] = result;
        }
    }
    return memo[0];
}

// A few example runs:
console.log(solve('8473817', 1)); // 6922441
console.log(solve('21322', 2)); // 1386
console.log(solve('191111', 2)); // 10101
(Code not supplied because this is homework.)
You have found that you can use the method once and get a solution for k=1.
Can you do it and find the best solution ending at every position in the string?
Now can you use the output of that second generalization and a similar method to get a complete solution for k=2?
Now can you write this as a loop to solve for arbitrary k?
If you can do all that, then finishing is easy.
You have n-1 positions and k operators to insert. To me that looks like a binary number with n-1 bits including k 1's and the other positions set to 0.
Systematically generate all permutations of [0..01..1], insert multiplication operators at the 1 positions and calculate the result for each permutation.
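A minimal brute-force sketch of that idea (mine, not the answerer's code): instead of bit patterns it enumerates the equivalent combinations of k cut positions out of the n - 1 gaps, which is only feasible for small n and k.

from itertools import combinations
from math import prod

def max_product_bruteforce(z, k):
    n = len(z)
    best = 0
    for cuts in combinations(range(1, n), k):          # choose k of the n-1 gaps
        bounds = (0,) + cuts + (n,)
        parts = [int(z[bounds[i]:bounds[i + 1]]) for i in range(len(bounds) - 1)]
        best = max(best, prod(parts))
    return best

print(max_product_bruteforce('8473817', 1))  # 6922441
print(max_product_bruteforce('21322', 2))    # 1386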

Time and Space algorithm complexity

I am coding a brute-force approach for a coding problem: I need to compute the maximum score path in an array with a maximum step of k.
Input: nums = [1,-1,-2,4,-7,3], k = 2
Output: 7
Explanation: You can choose your jumps forming the subsequence [1,-1,4,3] (underlined above). The sum is 7.
And I encountered a problem with calculating the complexity. My thought was that on each element we may call the function k times, so time and space are O(k^n), where n is the length of the array. My second guess: for the first element we call the function at most 1 time, for the second 2 times (that is, if k > i), and so on. So we have the sum 1 + 2 + ... + k + k + ... + k = ((1 + k) / 2) * k + ((k + k) / 2) * (n - k) = O(k * n). I think the first one is correct, but I can't tell for sure why :/
Here's my Java code:
public int maxResult(int[] nums, int k) {
    return maxResult(nums, k, nums.length - 1);
}

private int maxResult(int[] nums, int k, int index) {
    if (index == 0)
        return nums[0];
    int max = Integer.MIN_VALUE;
    int start = index - k < 0 ? 0 : index - k;
    for (int i = start; i < index; i++) {
        int res = maxResult(nums, k, i);
        System.out.println(i);
        max = Math.max(res, max);
    }
    return max + nums[index];
}
The recurrence relation for your code for a particular k is
C(n) = sum(C(n-i) for i = 1...k) for n>k
C(n) = C(1) + C(2) + ... + C(n-1) for n <= k
C(1) = 1
These are the recurrence relations for the higher-order Fibonacci numbers, shifted by k-1 places. That is, C(n) = kFib(k, n+k-1). The k-Fibonacci numbers grow as Theta(alpha^n), where alpha is some constant based on k -- for k=2, alpha is the golden ratio, and as k increases, alpha gets closer and closer to 2. (Specifically, alpha is the positive root of x^k - x^(k-1) - ... - x - 1.)
Therefore C(n) = kFib(k, n+k-1) = Theta(alpha^(n+k)).
Because alpha is always less than 2, O(2^(n+k)) is a simple correct bound, although not a tight one.
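If it helps to see this growth concretely, here is a small counting sketch (my own addition, not part of the answer): it counts how many times the naive recursion above is invoked for k = 2 and prints the ratio between consecutive counts, which approaches the golden ratio as predicted.

def calls(n, k):
    # count the invocations of the naive recursion for an array of length n
    count = 0
    def rec(index):
        nonlocal count
        count += 1
        if index == 0:
            return
        for i in range(max(0, index - k), index):
            rec(i)
    rec(n - 1)
    return count

for n in range(5, 16):
    print(n, calls(n, 2), round(calls(n + 1, 2) / calls(n, 2), 3))  # ratio -> ~1.618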

Number of ways to write n as sum of k numbers with restrictions on each part

Title says it all.
I need to split n as a sum of k parts where the i-th part ki should be in the range 1 <= ki <= ri for a given array r.
for example -
n = 4, k = 3 and r = [2, 2, 1]
ans = 2
#[2, 1, 1], [1, 2, 1]
Order matters. (2, 1, 1) and (1, 2, 1) are different.
I thought of solving it using the stars and bars method, but because of the upper bound ri I don't know how to approach it.
I implemented a direct recursive function and it works fine for small values only.
The constraints of the original problem are:
1 <= n <= 10^7
1 <= k <= 10^5
1 <= ri <= 51
All calculations are done modulo a prime.
I found a similar problem HERE but I don't know how to implement it in a program.
My brute-force recursive function -
#include <bits/stdc++.h>
using namespace std;

#define MAX 1000
const int md = 1e9 + 7;
vector<int> k;                    // per-part upper bounds r_i
vector<map<int, int>> mapper;     // memoization: mapper[cur][sum]
vector<int> hold;

int solve(int sum, int cur) {
    if (cur == (k.size() - 1) && sum >= 1 && sum <= k[cur]) return 1;
    if (cur == (k.size() - 1) && (sum < 1 || sum > k[cur])) return 0;
    if (mapper[cur].find(sum) != mapper[cur].end())
        return mapper[cur][sum];
    int ans = 0;
    int start = 1;
    for (int i = start; i <= k[cur]; ++i) {
        int remain = sum - i;
        int seg = (k.size() - cur) - 1;   // parts still to fill after this one
        if (remain < seg) break;          // each remaining part needs at least 1
        int res = solve(sum - i, cur + 1);
        ans = (1LL * ans + res) % md;
    }
    mapper[cur][sum] = ans;
    return ans;
}

int main() {
    for (int i = 0; i < MAX; ++i) k.push_back(51); // restriction for each part, default 51
    mapper.resize(MAX);
    cout << solve(MAX + MAX, 0) << endl;
}
Instead of using a map for storing the results of the computation I used a two-dimensional array, and it gave a very good performance boost, but I cannot use it because of the large n and k values.
How could I improve my recursive function, or what are other ways of solving this problem?
That's an interesting problem.
First, let's substitute r_i := r_i - 1 and n := n - k, so that each part is a number in [0, r_i]; this is just for convenience. Now it's possible to add some fictitious parts (each with r_i = 0) to make the number of parts a power of 2 without changing the answer.
Now let's represent each interval [0, r_i] as the polynomial 1 * x^0 + 1 * x^1 + ... + 1 * x^r_i. If we multiply all these polynomials, the coefficient of x^n will be the answer.
There is a technique called the Number Theoretic Transform (NTT) which allows multiplying two polynomials modulo p in O(size * log(size)).
If you just multiply them one by one using NTT, the code will work in something like O(n * k * log(k * max(r))). That's very slow.
But now our fictitious parts help. Let's use a divide-and-conquer technique. We'll make O(log m) steps; on each step we multiply the (2i)-th and (2i+1)-th polynomials, and in the next step we multiply the resulting polynomials of this step.
Each step works in O(k * log(k)) (treating max(r) <= 51 as a constant) and there are O(log(k)) steps, so the algorithm works in O(k * log^2(k)). It's fast asymptotically, but I'm not sure if it fits the time limit for this problem; I think it would take about 20 seconds on the max test.
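To make the pairing scheme concrete, here is a small sketch of the divide-and-conquer combining step (my own illustration, not the answerer's code). It uses plain quadratic convolution in place of NTT; swapping mul for an NTT-based multiplication is what gives the complexity claimed above.

MOD = 10**9 + 7

def mul(a, b):
    # naive convolution modulo MOD; an NTT would do this in O(d log d)
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def count_ways(n, k, r):
    # part i contributes x^0 .. x^(r_i - 1) after substituting part := part - 1,
    # so the answer is the coefficient of x^(n - k) in the product of all polynomials
    polys = [[1] * ri for ri in r]
    while len(polys) > 1:  # combine polynomials pairwise, level by level
        nxt = [mul(polys[i], polys[i + 1]) for i in range(0, len(polys) - 1, 2)]
        if len(polys) % 2:
            nxt.append(polys[-1])
        polys = nxt
    target = n - k
    return polys[0][target] if 0 <= target < len(polys[0]) else 0

print(count_ways(4, 3, [2, 2, 1]))  # 2, matching the example in the question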

Take every k-th element from the (1 .. n) natural numbers series

For example, we have the series 1, 2, 3, 4, 5. We take every 3rd element =>
3, 1, 5, 2, 4 (a chosen element doesn't remain; we keep taking while the series is not empty). A naive implementation with a circular doubly linked list is not a good idea because of performance. Can you give me advice on which data structures and algorithms are more applicable?
Build a complete binary tree containing the numbers 1 to n in sorted (in-order) position; e.g. for n=15 the root holds 8, its left subtree holds 1..7 and its right subtree holds 9..15.
In each branch, store the number of nodes to the left of it; this will allow us to quickly find the i-th node. (You'll see that this tree has a very predictable structure and values, and generating it is much more efficient than building a same-sized binary tree with randomly-ordered values. It's also an ideal candidate for a tree-in-an-array.)
Then, to find the i-th number, start at the root node, and at every node, if i is one greater than the number of nodes to the left, you've found the i-th number, else go left (if i is not greater than the number of nodes to the left) or right (if i is more than 1 greater than the number of nodes to the left).
Whenever you go left, decrement the count of nodes to the left of this node (because we'll be removing one).
Whenever you go right, decrease the number you're looking for by the number of nodes to the left of the node, plus 1 (or plus 0 if the value in the node has been erased).
When you've found the i-th node, read its value (to add to the removal order list) and then set its value to 0. Thereafter, if the i-th node we're looking for has had its value erased, we'll go right and then take the leftmost node.
We start with a value i = k, and then every time we've erased the number in the i-th node, we'll decrement the total number of nodes and set i = (i + k - 1) % total (or if that is zero: i = total).
This gives an O(log n) lookup time and a total complexity of O(n log n).
Example walk-through: with n=15 (as in the tree described above) and k=6, the first steps are 6, 12, 3, 10, 2. At that point the situation is as follows.
We've just removed the second number, and now i = 2 + 6 - 1 = 7. We start at the root node, which has 4 nodes to the left of it and still has its value, so we go right and subtract 5 from the 7 we're looking for and get 2. We arrive at node 12 (which has been erased) and find there are 2 nodes to the left of it, so we decrement the number of nodes to the left of it and then go left. We come to node 10 (which has been erased) and find that it has 1 node to the left of it, and 1 = 2 - 1 so this is the node we're looking for; however, since its value has been erased, we go right and subtract 1 from the 2 we're looking for and get 1. We arrive at node 11, which has 0 nodes to the left of it (because it's a leaf), and 0 = 1 - 1, so this is the node we're looking for.
We then decrement the total number of nodes from 10 to 9, and update i from 7 to (7 + 6 - 1) % 9 = 3 and go on to find the third node (which is now the one with value 5).
Here's a simple implementation in JavaScript. It solves series of 100,000 numbers in less than a second, and it could probably be made faster and more space-efficient by using a tree-in-an-array structure.
(Unlike in the explanation above, the indexes of the numbers are zero-based, to simplify the code; so index 0 is the first number in the tree, and we look for the node with a number of left-connected children that equals the target index.)
function Tree(size) { // CONSTRUCTOR
    var height = Math.floor(Math.log(size) / Math.log(2));
    this.root = addNode(height, 1 << height, size);
    this.size = size;
    function addNode(height, value, max) { // RECURSIVE TREE-BUILDER
        var node = {value: value > max ? 0 : value, lower: (1 << height) - 1};
        if (height--) {
            node.left = addNode(height, value - (1 << height), max);
            if (value < max) { // DON'T ADD UNNECESSARY RIGHT NODES
                node.right = addNode(height, value + (1 << height), max);
            }
        }
        return node;
    }
}

Tree.prototype.cut = function(step) { // SEE ANSWER FOR DETAILS
    var sequence = [], index = (step - 1) % this.size;
    while (this.size) {
        var node = this.root, target = index;
        while (node.lower != target || node.value == 0) {
            if (target < node.lower) {
                --node.lower;
                node = node.left;
            } else {
                target -= node.lower + (node.value ? 1 : 0);
                node = node.right;
            }
        }
        sequence.push(node.value);
        node.value = 0;
        index = (index + step - 1) % --this.size;
    }
    return sequence;
}

var tree = new Tree(15);
var sequence = tree.cut(6);
document.write("15/6→" + sequence + "<BR>");
tree = new Tree(100000);
sequence = tree.cut(123456);
document.write("100000/123456→" + sequence);
NOTE:
If you look at the tree for n=10, you'll see that the node to the right of the root has an incomplete tree with 2 nodes to its left, but the algorithm as implemented in the code example above gives it an incorrect left-node count of 3 instead of 2.
However, nodes with an incomplete tree to their left never hold a value themselves, and never have nodes to their right. So you always go left there anyway, and the fact that their left-node count is too high is of no consequence.
If you just need the last number, this is known as the Josephus problem and there are well-known formulas for computing the answer in O(N) time.
I don't know if one can adapt it to run a full simulation, so I'll describe a straightforward O(N log N) solution here:
Let's keep all numbers in a treap with implicit keys. We need to find the k-th element and delete it at each step (in fact, there can be a shift, so it's more like (cur_shift + k) % cur_size, but it doesn't really matter). A treap can do it. We just need to split it into 3 parts [0, k - 1], [k, k] and [k + 1, cur_size - 1], print the number in the node that corresponds to the second part and merge the first and last part back together. It requires O(log N) time per step, so it should be good enough for the given constraints.
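(As a side note to the remark above about the plain Josephus problem: here is a minimal sketch, my own addition, of the standard O(N) last-survivor recurrence, with 1-based numbering.)

def josephus_last(n, k):
    # classic recurrence: J(1) = 0, J(m) = (J(m-1) + k) % m
    pos = 0
    for size in range(2, n + 1):
        pos = (pos + k) % size
    return pos + 1  # convert to 1-based numbering

print(josephus_last(5, 3))  # 4 -- the last element of the sequence 3, 1, 5, 2, 4 in the question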
Here is an implementation with an array representation of the binary tree, only storing the size of the left sub-tree as node value. The input array is not actually stored, but silently assumed to be the leaves at the bottom level, below the binary tree:
function josephusPermutation(size, step) {
    var len = 1 << 32 - Math.clz32(size-1), // Smallest power of 2 >= size
        tree = Array(len).fill(0), // Create tree in array representation
        current = 0,
        skip = step - 1,
        result = Array(size).fill(0),
        goRight, leftSize, order, i, j;
    // Initialise tree with sizes of left subtrees as node values
    (function init(i) {
        if (i >= len) return +(i - len < size); // Only count when within size
        var left = tree[i] = init(i*2); // recursive, only store left-size
        return left + (left ? init(i*2+1) : 0); // return sum of left and right
    })(1);
    for (j = 0; j < result.length; j++, size--) {
        current = (current + skip) % size; // keep within range
        order = current;
        for (i = 1; i < len; i = i*2+goRight) {
            leftSize = tree[i];
            goRight = order >= leftSize;
            if (goRight) {
                order -= leftSize; // Moving rightward, counting what is at left side.
            } else {
                tree[i]--; // we will remove value at left side
            }
        }
        result[j] = 1 + i - len;
    }
    return result;
}

var sequence = josephusPermutation(100000, 123456);
console.log(sequence.join(','));
Below is an implementation of the O(n log k) algorithm of Lei Wang and Xiaodong Wang (2013) (1) (very similar to, if not based on, the algorithm by Errol Lloyd, published in 1983). The idea is to divide the original sequence into n/m binary trees of height log k. The algorithm is actually designed for the "feline" Josephus problem, where the participants can have more than one life (listed in the array variable below, global.l).
I also like the O(1) space algorithms by Knuth, Ahrens, and Kaplansky (outlined in a master's thesis by Gregory Wilson, California State University, Hayward, 1979 (2)), which take a longer time to process, although they can be quite fast depending on the parameters.
Knuth’s algorithm for J(n,d,t) (t is the ith hit), a descending sequence:
Let x_1 = d * t and for k = 2, 3, ...,
let x_k = ⌊(d * x_(k−1) − d * n − 1) / (d − 1)⌋
Then J(n,d,t) = x_p where x_p is the first term in the sequence <= n.
Ahrens’ algorithm for J(n,d,t), an ascending sequence:
Let a_1 = 1 and for k = 2, 3, ...
let a_k = ⌈(n − t + a_(k−1)) * d / (d − 1)⌉
If a_r is the first term in the sequence such that a_r + 1 ≥ d * t + 1
then J(n,d,t) = d * t + 1 − a_r.
Kaplansky’s algorithm for J(n,d,t):
Let Z+ be the set of positive integers and for k = 1, 2, ..., t
define a mapping P_k : Z+ → Z+ by P_k(m) = (m+d−1) − (n−k+1)⌊(m−k+d−1)/(n−k+1)⌋
Then, J(n,d,t) = P_1 ◦ P_2 ◦ ··· ◦ P_t(t).
JavaScript code:
var global = {
    n: 100000,
    k: 123456,
    l: new Array(5).fill(1),
    m: null,
    b: null,
    a: [],
    next: [],
    prev: [],
    i: 0,
    limit: 5,
    r: null,
    t: null
}

function init(params){
    global.m = Math.pow(2, Math.ceil(Math.log2(params.k)));
    params.b = Math.ceil(params.n / global.m);
    for (let i=0; i<params.b; i++){
        let s = i * global.m,
            t = (i + 1) * global.m,
            u = [];
        for (let j=0; j<global.m; j++)
            u[j] = 0;
        for (let j=s; j<=Math.min(t-1,params.n-1); j++)
            u[j-s] = -(j + 1);
        global.a[i] = [];
        build(u, global.a[i]);
        t = (i + 1) % params.b;
        params.next[i] = t;
        params.prev[t] = i;
    }
}

function build(u,v){
    function count(_v, i){
        if (global.m < i + 2){
            if (_v[i] < 0)
                return 1;
            else
                return 0;
        } else {
            _v[i] = count(_v, 2*i + 1);
            _v[i] = _v[i] + count(_v, 2*i + 2);
            return _v[i];
        }
    }
    for (let i=0; i<global.m; i++)
        v[global.m + i - 1] = u[i];
    count(v, 0);
}

function algorithmL(n, b){
    global.r = 0;
    global.t = b - 1;
    while (global.i < global.limit){
        tree(global, global);
        let j = leaf(global, global);
        hit(global.i,j,global,global);
        global.i = global.i + 1;
    }
}

function tree(params_r,params_t){
    if (params_t.t === global.next[params_t.t] && params_r.r < global.k){
        params_r.r = global.k + global.a[params_t.t][0] - 1 - (global.k - params_r.r - 1) % global.a[params_t.t][0];
    } else {
        while (params_r.r < global.k){
            params_t.t = global.next[params_t.t];
            params_r.r = params_r.r + global.a[params_t.t][0];
        }
    }
}

function size(t,j){
    if (global.a[t][j] < 0)
        return 1;
    return global.a[t][j];
}

function leaf(params_r,params_t){
    let j = 0,
        nxt = params_r.r - global.k;
    while (j + 1 < global.m){
        let rs = size(params_t.t, 2*j + 2);
        if (params_r.r - rs < global.k){
            j = 2*j + 2;
        } else {
            j = 2*j + 1;
            params_r.r = params_r.r - rs;
        }
    }
    params_r.r = nxt;
    return j;
}

function hit(i,j,params_r,params_t){
    let h = -global.a[params_t.t][j];
    console.log(h);
    if (global.l[h-1] > 1)
        global.l[h-1] = global.l[h-1] - 1;
    else
        kill(i,j,params_r,params_t);
}

function kill(i,j,params_r,params_t){
    global.a[params_t.t][j] = 0;
    while (j > 0){
        j = Math.floor((j - 1) / 2);
        global.a[params_t.t][j] = global.a[params_t.t][j] - 1;
    }
    if (params_t.t !== global.next[params_t.t]){
        if (global.a[params_t.t][0] + global.a[global.next[params_t.t]][0] === global.m){
            params_r.r = params_r.r + global.a[global.next[params_t.t]][0];
            combine(params_t);
        } else if (global.a[params_t.t][0] + global.a[global.prev[params_t.t]][0] === global.m){
            params_t.t = global.prev[params_t.t]; // move to the previous tree before combining
            combine(params_t);
        }
    }
}

function combine(params_t){
    let x = global.next[params_t.t],
        i = 0,
        u = [];
    for (let j=0; j<global.m; j++)
        if (global.a[params_t.t][global.m + j - 1] < 0){
            u[i] = global.a[params_t.t][global.m + j - 1];
            i = i + 1;
        }
    for (let j=0; j<global.m; j++)
        if (global.a[x][global.m + j - 1] < 0){
            u[i] = global.a[x][global.m + j - 1];
            i = i + 1;
        }
    build(u,global.a[params_t.t]);
    global.next[params_t.t] = global.next[global.next[params_t.t]];
    global.prev[global.next[params_t.t]] = params_t.t;
}

init(global);
algorithmL(global.n, global.b);
(1) L. Wang and X. Wang. A Comparative Study on the Algorithms for a Generalized Josephus Problem. Applied Mathematics & Information Sciences, 7, No. 4, 1451-1457 (2013).
(2) References from Wilson (1979):
Knuth, D. E., The Art of Computer Programming, Addison-Wesley, Reading Mass., Vol I Fundamental Algorithms, 1968, Ex. 22, p158; Vol. III, Sorting and Searching, Ex. 2, pp. 18-19; Vol. I, 2-nd ed., p.181.
Ahrens, W., Mathematische Unterhaltungen und Spiele, Teubner: Leipzig, 1901, Chapter 15, 286-301.
Kaplansky, I. and Herstein I.N., Matters Mathematical, Chelsea, New York, 1978, pp. 121-128.

What is the maximum water collected between two histograms?

I recently came across this problem:
You are given the heights of n histograms, each of width 1. You have to choose any two histograms such that, if it starts raining and all other histograms (except the two you have selected) are removed, the water collected between the two chosen histograms is maximised.
Input:
9
3 2 5 9 7 8 1 4 6
Output:
25
Between third and last histogram.
This is a variant of the Trapping Rain Water problem.
I tried two solutions, but both have a worst-case complexity of O(N^2). How can we optimise further?
Sol1: Brute force for every pair.
int maxWaterCollected(vector<int> hist, int n) {
    int ans = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            ans = max(ans, min(hist[i], hist[j]) * (j - i - 1));
        }
    }
    return ans;
}
Sol2: Keep a sequence of histograms in increasing order of height. For every histogram, find its best histogram in this sequence. Now, if all histograms are in increasing order, this solution also becomes O(N^2).
int maxWaterCollected(vector<int> hist, int n) {
    vector< pair<int, int> > increasingSeq(1, make_pair(hist[0], 0)); // initialised with 1st element.
    int ans = 0;
    for (int i = 1; i < n; i++) {
        // compute best result from current increasing sequence
        for (int j = 0; j < increasingSeq.size(); j++) {
            ans = max(ans, min(hist[i], increasingSeq[j].first) * (i - increasingSeq[j].second - 1));
        }
        // add this histogram to sequence
        if (hist[i] > increasingSeq.back().first) {
            increasingSeq.push_back(make_pair(hist[i], i));
        }
    }
    return ans;
}
Use 2 iterators, one from begin() and one from end() - 1.
Until the 2 iterators are equal:
Compare the current result with the max, and keep the max.
Move the iterator with the smaller value (begin -> end or end -> begin).
Complexity: O(n).
Jarod42 has the right idea, but it's unclear from his terse post why his algorithm, described below in Python, is correct:
def candidates(hist):
    l = 0
    r = len(hist) - 1
    while l < r:
        yield (r - l - 1) * min(hist[l], hist[r])
        if hist[l] <= hist[r]:
            l += 1
        else:
            r -= 1

def maxwater(hist):
    return max(candidates(hist))
The proof of correctness is by induction: the optimal solution either (1) belongs to the candidates yielded so far or (2) chooses histograms inside [l, r]. The base case is simple, because all histograms are inside [0, len(hist) - 1].
Inductively, suppose that we're about to advance either l or r. These cases are symmetric, so let's assume that we're about to advance l. We know that hist[l] <= hist[r], so the value is (r - l - 1) * hist[l]. Given any other right endpoint r1 < r, the value is (r1 - l - 1) * min(hist[l], hist[r1]), which is less because r - l - 1 > r1 - l - 1 and hist[l] >= min(hist[l], hist[r1]). We can rule out all of these solutions as suboptimal, so it's safe to advance l.
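A quick check of the Python above against the question's example (my addition):

print(maxwater([3, 2, 5, 9, 7, 8, 1, 4, 6]))  # 25, between the third and the last histogram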
