Algorithm - Grid Map: find number of sub-blocks with specific property

I have an NxN grid map. Each cell has the value '0' or '1'. I am trying to find the exact number of distinct rectangular sub-blocks of the map that contain a specific number of '1's, where this number can be between 1 and 6. I have thought of checking every possible rectangle, but this is very slow for a 500x500 map, and the solution must run in ~1 sec on a common desktop computer. Can someone point me to a corresponding problem so I can look for a working algorithm, or better, can someone suggest a working algorithm for this problem? Thank you all in advance!

I imagine that your search of all the rectangles is slow because you are actually counting the '1's in each possible rectangle. The solution is not to count within each rectangle, but rather to create a second NxN array which contains the count for the rectangle (0,0..x,y); call this OriginCount. Then, to calculate the count for any given rectangle, you don't have to walk the rectangle and count. You can simply use
Count(a,b..c,d) = OriginCount(c,d) + OriginCount(a-1,b-1) -
OriginCount(a-1,d) - OriginCount(c,b-1)
That turns counting the ones in any given rectangle from an O(N²) operation into a constant-time one, and your code gets on the order of thousands of times faster (for your 500x500 case).
Mind you, in order to set up the OriginCount array you can use the same idea: don't count the ones from (0,0) to (x,y) separately for each cell. Rather, use the formula
OriginCount(x,y) = OriginCount(x-1,y) + OriginCount(x,y-1) - OriginCount(x-1,y-1) +
GridMap(x,y) == 1 ? 1 : 0;
You also have to account for the edge cases where x=0 or y=0, as in the sketch below.
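For illustration, here is a minimal Java sketch of both formulas (0-based indices; out-of-range neighbours are treated as 0, which is exactly the x=0 / y=0 edge case):

// Build OriginCount: originCount[x][y] = number of 1s in the rectangle (0,0)..(x,y).
static int[][] buildOriginCount(int[][] gridMap) {
    int n = gridMap.length;
    int[][] originCount = new int[n][n];
    for (int x = 0; x < n; x++) {
        for (int y = 0; y < n; y++) {
            int left = (x > 0) ? originCount[x - 1][y] : 0;
            int up   = (y > 0) ? originCount[x][y - 1] : 0;
            int diag = (x > 0 && y > 0) ? originCount[x - 1][y - 1] : 0;
            originCount[x][y] = left + up - diag + gridMap[x][y];
        }
    }
    return originCount;
}

// Count of 1s in the rectangle (a,b)..(c,d), in constant time.
static int count(int[][] originCount, int a, int b, int c, int d) {
    int total = originCount[c][d];
    if (a > 0) total -= originCount[a - 1][d];
    if (b > 0) total -= originCount[c][b - 1];
    if (a > 0 && b > 0) total += originCount[a - 1][b - 1];
    return total;
}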

Related

Interesting puzzle

Task definition:
I have a matrix of natural numbers. The task is to find a path from the top-left corner of the matrix to the bottom-right corner that gains the maximum score.
Rules of navigation: if you are located at [i][j] you can move:
a) to the [i][j-1], [i][j+1], [i+1][j] cells, gaining zero points
b) to [i+1][j+1], gaining matrix[i][j] points
Little example:
Assume you have a score of 50 and the matrix
0 3 5 3 2
4 7 2 5 2
4 3 5 2 5
Assume you are in cell [1][1] (matrix[1][1] = 7). You can navigate to:
a) cell [1][0], keeping score 50
b) cell [1][2], keeping score 50
c) cell [2][1], keeping score 50
d) cell [2][2], reaching score 57
The problem:
I am solving this task in a very slow way...
I tried to implement it with recursion. That part is easy if you just want the maximum score. Something like
public int loop(int i, int j) {
    if (i == 0 && j == 0) return 0;        // start cell
    if (i == 0) return loop(i, j - 1);     // top row: only horizontal moves
    if (j == 0) return loop(i - 1, j);     // first column: only vertical moves
    int left = loop(i, j - 1);
    int top = loop(i - 1, j);
    int diagonal = loop(i - 1, j - 1) + matrix[i - 1][j - 1];
    return maximum(left, top, diagonal);
}
BUT, I want to find a path with maximum score! And it's very time/memory consuming.
Why it's time/memory consuming:
And there is one problem: I need to store a path collection and pass it as a parameter to the loop method. But the loop method forks on each iteration, so I have to copy the path collection three times per iteration. Otherwise, each fork of loop would modify the common path collection, and in the end it would contain all possible paths. I mean, if left is the biggest among left, top & diagonal, then we must not include the paths linked with top and diagonal.
Question:
How do I solve this the right way?
EDIT:
Actually there is no need to find the full path. It is only necessary to find the points at which you gain score (the points where you make a diagonal move).
You don't need dynamic programming nor brute force for this!
To see why, let's analyze the rules:
You can move in direction j freely (left & right), so there's no reason to be careful about that direction - you can move into the optimal horizontal position whenever you want.
Once you increase i (down) there's no way back (though you can increase i without gaining points). Each increase of i should net the maximal amount of points.
You gain points by leaving a cell, but you can only ever leave a row once.
That means you can subdivide this problem and do not need dynamic programming: you can move to the optimal j location, then take one diagonal step; repeat until done.
The optimal i step is moving from a non-last cell in a row with the highest value. You can't move from the last cell because there's no diagonal move possible from it - so if your matrix has only one column (or one row, for that matter) you'll never gain points. You can't lose points because the values are natural numbers (but if negative numbers were allowed, you could still skip a row).
In more detail, the optimal path is then found by...
Does the matrix have just one column or row? Move right (or down) repeatedly without gaining points, then end; you can't do much here.
Find the maximum value in the current row, ignoring the last value.
Generate 'j' moves towards a maximally-valued cell, then move diagonally.
If you're not on the last row, go back to step 2.
You're on the last row and cannot gain more points; just generate moves towards the bottom-right corner to finish your path.
That's it!
Note that there may be multiple maximal paths; your problem specification doesn't guarantee a unique solution.
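For illustration, here is a minimal Java sketch of these steps (the move names are made up for this sketch; it assumes a non-empty rectangular matrix):

import java.util.ArrayList;
import java.util.List;

// Greedy path construction: per row, walk to the best non-last cell, step diagonally.
static List<String> buildPath(int[][] m) {
    List<String> moves = new ArrayList<>();
    int rows = m.length, cols = m[0].length, j = 0;
    if (rows == 1 || cols == 1) {                       // step 1: no points possible
        for (int k = 1; k < cols; k++) moves.add("right");
        for (int k = 1; k < rows; k++) moves.add("down");
        return moves;
    }
    for (int i = 0; i < rows - 1; i++) {
        int best = 0;                                   // step 2: row max, ignoring last column
        for (int t = 1; t < cols - 1; t++) if (m[i][t] > m[i][best]) best = t;
        while (j < best) { moves.add("right"); j++; }   // step 3: j moves...
        while (j > best) { moves.add("left"); j--; }
        moves.add("diagonal");                          // ...then gain m[i][j] points
        j++;
    }
    while (j < cols - 1) { moves.add("right"); j++; }   // step 5: finish the last row
    return moves;
}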
EDIT: If you don't need the actual path, just the numbers you scored, the algorithm is much simpler - remove or disregard the last row and last column, then for each row i take the maximum value in that row.
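In code, that score-only version is a few lines (a sketch; on the example matrix above it returns 5 + 7 = 12):

// Sum of per-row maxima, disregarding the last row and the last column.
static int maxScore(int[][] m) {
    int rows = m.length, cols = m[0].length;
    if (rows < 2 || cols < 2) return 0;    // no diagonal move is ever possible
    int score = 0;
    for (int i = 0; i < rows - 1; i++) {
        int best = 0;
        for (int j = 0; j < cols - 1; j++) best = Math.max(best, m[i][j]);
        score += best;
    }
    return score;
}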
EDIT:
I misread the question as allowing movement only down and to the right (i.e. j could only change to j or j+1), so this answer is wrong.
You can use dynamic programming to solve this problem. Greedy doesn't quite work, because you can only travel "down and to the right".
The naive dynamic programming solution essentially "works backwards" in a literal sense: start from the bottom-right and compute the max score attainable when starting at each cell.
Going right-to-left and bottom-up, you can compute the best score from each cell simply. You do this for the m x n matrix, then you start from the top left and at each step choose the direction with the max score.

Labelling a grid using n labels, where every label neighbours every other label

I am trying to create a grid with n separate labels, where each cell is labelled with one of the n labels such that all labels neighbour (edge-wise) all other labels somewhere in the grid (I don't care where). Labels are free to appear as many times as necessary, and I'd like the grid to be as small as possible. As an example, here's a grid for five labels, 1 to 5:
3 2 4
5 1 3
2 4 5
While generating this by hand is not too bad for small numbers of labels, it appears to be very hard to generate a grid of reasonable size for larger numbers and so I'm looking to write a program to generate them, without having to resort to a brute-force search. I imagine this must have been investigated before, but the closest I've found are De Bruijn tori, which are not quite what I'm looking for. Any help would be appreciated.
EDIT: Thanks to Benawii for the following improved description:
"Given an integer n, generate the smallest possible matrix where for every pair (x,y) where x≠y and x,y ∈ {1,...,n} there exists a pair of adjacent cells in the matrix whose values are x and y."
You can experiment with a simple greedy algorithm.
I don't think I'm able to give you a strict mathematical proof, not least because the question is not strictly defined, but the algorithm is quite intuitive.
First, if you have the numbers 1...K (K labels) then you need at least K*(K-1)/2 adjacent cell pairs (connections) for full coverage. An NxM matrix provides (N-1)*M + (M-1)*N = 2*N*M - (N+M) connections.
Since you didn't say what you mean by 'smallest matrix', let's assume you mean the area. In that case it is clear that, for a given area, a square matrix generates the larger number of connections, because it has more 'inner' cells adjacent to 4 others. For example, for area 16 the 4x4 matrix is better than 2x8. 'Better' here is intuitive - more connections, more chances to reach the goal. So let's use square target matrices and expand them if needed. For M=N the formula above becomes 2*N*(N-1).
Then we can experiment with the following greedy algorithm:
For input number K find the N such that 2*N*(N-1)>K*(K-1)/2. A simple school equation.
Keep an adjacency matrix M, set M[i][i]=1 for all i, and 0 for the rest of the pairs.
Initialize a resulting matrix R of size NxN, fill with 'empty value' markers, for example with -1.
Start from the top-left corner and iterate right-down:
for (int i = 0; i < N; ++i)
    for (int j = 0; j < N; ++j)
        // fill R[i][j] as described in the next step
For each such R[i][j] (which is -1 at this point), find the value that 'fits best'. Again, 'fit best' is an intuitive definition; here it means a value that contributes a new, unused connection. To decide, build the set S of already-filled neighbour values - its size is at most 2 (the upper and left neighbours). Then find the first k such that M[x][k]=0 for both numbers x in S. If there is no such number, try for at least one new connection; if there is no number even for one, then both neighbours are already completely covered, so put here some number that is still uncovered - preferably the one in the 'worst situation', i.e. the x for which Sum(M[x][i]) is smallest. You should also prefer the one in the 'worst situation' whenever there are several candidates to choose from.
After setting the value for R[i][j] don't forget to mark the new connections with numbers x from S - M[R[i][j]][x] = M[x][R[i][j]] = 1.
If the matrix is filled and there are still unmarked connections in M, then append another row to the matrix and continue. If all the connections are found before the end, then remove the extra rows.
You can check this algorithm and see what happens. Step 5 is the place for playing around, particularly in choosing between candidates in equal situations (several numbers can be in an equally 'worst situation').
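To make the steps concrete, here is a rough Java sketch of the whole greedy fill (a sketch under the assumptions above, not a tuned implementation: it grows the grid row by row instead of pre-filling an NxN grid with -1 and trimming, and it breaks ties by preferring the label in the 'worst situation'):

import java.util.ArrayList;
import java.util.List;

// Greedy fill: place labels 1..k so that every pair ends up adjacent somewhere.
static List<int[]> label(int k) {
    int n = 1;
    while (2 * n * (n - 1) < k * (k - 1) / 2) n++;       // step 1: pick N
    boolean[][] m = new boolean[k + 1][k + 1];           // step 2: adjacency matrix
    for (int x = 1; x <= k; x++) m[x][x] = true;
    int remaining = k * (k - 1) / 2;                     // uncovered connections
    List<int[]> grid = new ArrayList<>();
    for (int i = 0; remaining > 0; i++) {                // step 6: append rows as needed
        int[] row = new int[n];
        grid.add(row);
        for (int j = 0; j < n; j++) {                    // step 4: iterate right-down
            List<Integer> s = new ArrayList<>();         // step 5: filled neighbours
            if (i > 0) s.add(grid.get(i - 1)[j]);
            if (j > 0) s.add(row[j - 1]);
            int best = 1, bestNew = -1, bestCovered = Integer.MAX_VALUE;
            for (int v = 1; v <= k; v++) {
                int newConn = 0, covered = 0;
                for (int u : s) if (!m[v][u]) newConn++;
                for (int u = 1; u <= k; u++) if (m[v][u]) covered++;
                // prefer new connections, then the least-covered ('worst') label
                if (newConn > bestNew || (newConn == bestNew && covered < bestCovered)) {
                    best = v; bestNew = newConn; bestCovered = covered;
                }
            }
            row[j] = best;
            for (int u : s) {                            // mark the new connections
                if (!m[best][u]) { m[best][u] = m[u][best] = true; remaining--; }
            }
        }
    }
    return grid;
}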
Example:
for K=6 we need 15 connections:
N=4, so we need a 4x4 square matrix. The formula says that a 4x3 matrix has 17 connections, so it could possibly fit, but we will try 4x4.
Here is the output of the algorithm above:
1234
5615
2413
36**
I'm not sure whether you can do it with 4x3 - maybe yes... :)

Looking for an algorithm (version of 2-dimensional binary search)

Easy problem and known algorithm:
I have a big array with 100 members. First X members are 0, and the rest are 1. Find X.
I am solving it by a binary search: Check member 50, if it is 0 - check member 75, etc, until I find adjacent 0 and 1.
I am looking for an optimized algorithm for the same problem in 2-dimensions:
I have 2-dimensional array 100*100. Those members that are on rows 0-X AND on columns 0-Y are 0, and the rest are 1. How to find Y and X?
Edit: the optimal solution consists of two simple binary searches.
I'm very sorry for the long and convoluted post below. What the problem fundamentally consists of is finding a point in a space that contains 100*100 elements. The best you can do is divide this space in two at each step. You can do it in a convoluted way (the one I describe in the rest of the post), but if you realize that a binary search on the X axis still divides the search space in two at each step (and the same goes for the Y axis), then you understand that it's optimal.
I'll still leave what I wrote below, and I'm sorry for making some peremptory affirmations in it.
If you're looking for a simple algorithm (though not optimal) just run the binary search twice as suggested.
However, if you want an optimal algorithm, you can look for the boundary on X and on Y at the same time. (Note that the two algorithms have the same asymptotic complexity, but the optimal algorithm will still be faster.)
In all the following graphics, the point (0, 0) is in the bottom left corner.
Basically when you choose a point and get the result, you cut your space in two parts. When you think about it that is actually the biggest amount of information you can extract from this.
If you choose a point (the black cross in the original figure) and the result is 1 (red lines), the point you're looking for cannot be in the gray space (and thus must be in the remaining white area).
On the other hand, if the value is 0 (blue lines), the point you're looking for cannot be in the gray area (and thus must be in the remaining white area).
So, if you get one 0 result and one 1 result, this is what you'll get:
The point you're looking for is in rectangle 1, 2 or 3. You just need to check the two corners of rectangle 3 to know which of the 3 rectangles is the right one.
So the algorithm is the following :
Note the bottom-left and top-right corners of the rectangle you're working with.
Do a binary search along the diagonal of the rectangle until you've hit at least one 1 result and one 0 result.
Check the 2 other corners of rectangle 3 (you'll necessarily already know the values of the two corners on the diagonal). It is often possible to check only one corner to identify the right rectangle (but you'll have to check both corners if the right rectangle is rectangle 3).
Determine whether the point you're looking for is in rectangle 1, 2 or 3.
Repeat, reducing the problem to the right rectangle, until the final rectangle is reduced to a point: that's the value you're looking for.
Edit: if you want true optimality, you'd note that when you choose the point (50, 50) you do not cut the space into equal parts: one part is three times bigger than the other. Ideally you'd choose a point that cuts the space into two regions of equal area.
You should compute once, at the beginning, the value factor = (1.0 - 1.0/sqrt(2.0)). Then when you want to cut between values a and b, choose the cutting point a + factor*(b-a). When you cut the initial 100x100 rectangle at the point (100*factor, 100*factor), the two regions will each have an area of (100*100)/2, so the convergence will be quicker.
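For illustration, the cut-point computation is tiny:

// Area-halving cut between a and b, as described above.
static final double FACTOR = 1.0 - 1.0 / Math.sqrt(2.0);  // ~0.2929

static double cutPoint(double a, double b) {
    return a + FACTOR * (b - a);
}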
Run your binary search twice. First determine X by running binary search on the last row and then determine Y by running binary search on last column.
Simple solution: go first in the X direction and then in the Y direction.
Check (0,50); if it is 0, check (0,75); continue until you find adjacent 0 and 1. Then search in the Y direction from there.
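A minimal Java sketch of the two searches (assuming grid[i][j] is 0 exactly when i <= X and j <= Y, with X, Y >= 0, so row 0 and column 0 each begin with zeros):

// Returns {X, Y}: binary search along row 0 for Y, then along column 0 for X.
static int[] findCorner(int[][] grid) {
    int n = grid.length;
    int lo = 0, hi = n - 1, y = 0;
    while (lo <= hi) {                      // last zero in row 0 gives Y
        int mid = (lo + hi) / 2;
        if (grid[0][mid] == 0) { y = mid; lo = mid + 1; } else { hi = mid - 1; }
    }
    lo = 0; hi = n - 1;
    int x = 0;
    while (lo <= hi) {                      // last zero in column 0 gives X
        int mid = (lo + hi) / 2;
        if (grid[mid][0] == 0) { x = mid; lo = mid + 1; } else { hi = mid - 1; }
    }
    return new int[] { x, y };
}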
Second solution:
Check member (50,50). If it is 1, check (25,25), and so on, until you find a 0. Continue until you find adjacent (X,X) and (X+1,X+1) that are 0 and 1. Then test (X,X+1) and (X+1,X). Neither or exactly one of them will be 0. If neither, you are finished. If exactly one, say (X+1,X), then you know that the box's corner lies between (X+1,X) and (100,X). Use binary search to find the box's height.
EDIT: As Chris pointed out, it seems that the simple approach is faster.
Second solution (modified):
Check member (50,50). If it is 1, check (25,25), and so on, until you find a 0. Continue until you find adjacent (X,X) and (X+1,X+1) that are 0 and 1. Then test (X,X+1). If it is 1, do a binary search on the line (X,X+1)...(X,100); else do a binary search on the line (X,X)...(100,X).
Even so, I am probably beating a dead horse here. If this is faster, it is by a negligible amount. This is just for theoretical fun. :)
EDIT 2: As Fezvez and Chris put it, binary search divides the search space in two most efficiently; my approach divides the area into 1/4 and 3/4 pieces. Fezvez pointed out that this can be remedied by calculating the dividing factor beforehand (at the cost of extra calculation). In the modified version of my algorithm I choose the direction in which to go (X or Y), which effectively also divides the search space in two, and then conduct a binary search. To conclude, this approach will always be a bit slower (and more complicated to implement).
Thank you, Igor Oks, for an interesting question. :)
Use binary search on both dimensions and the 1D case:
Start with j=50. The 1-D array obtained by varying i is of the desired form - so find X as in the 1D case.
If X = 100 (i.e. no ones), then set j=75 (the middle of the range in the j dimension) and repeat.
If X < 100, then you have found it. All that is left is to fix i=X and find Y as in the 1D case.

Parabolic knapsack

Let's say I have a parabola. Now I also have a bunch of sticks that are all of the same width (yes, my drawing skills are amazing!). How can I stack these sticks within the parabola such that I minimize the space used as much as possible? I believe this falls under the category of knapsack problems, but this Wikipedia page doesn't appear to bring me closer to a real-world solution. Is this an NP-hard problem?
In this problem we are trying to minimize the amount of area consumed (eg: Integral), which includes vertical area.
I cooked up a solution in JavaScript using processing.js and HTML5 canvas.
This project should be a good starting point if you want to create your own solution. I added two algorithms. One that sorts the input blocks from largest to smallest and another that shuffles the list randomly. Each item is then attempted to be placed in the bucket starting from the bottom (smallest bucket) and moving up until it has enough space to fit.
Depending on the type of input the sort algorithm can give good results in O(n^2). Here's an example of the sorted output.
Here's the insert in order algorithm.
function solve(buckets, input) {
    var buckets_length = buckets.length,
        results = [];
    for (var b = 0; b < buckets_length; b++) {
        results[b] = [];
    }
    // sort blocks largest to smallest
    input.sort(function(a, b) { return b - a; });
    input.forEach(function(blockSize) {
        // start at the bottom (smallest) bucket and move up
        var b = buckets_length - 1;
        while (b > 0) {
            if (blockSize <= buckets[b]) {
                results[b].push(blockSize);
                buckets[b] -= blockSize;
                break;
            }
            b--;
        }
    });
    return results;
}
Project on github - https://github.com/gradbot/Parabolic-Knapsack
It's a public repo so feel free to branch and add other algorithms. I'll probably add more in the future as it's an interesting problem.
Simplifying
First I want to simplify the problem. To do that:
I switch the axes and add them to each other; this results in x2 growth
I assume it is a parabola on a closed interval [a, b], where a = 0 and, for this example, b = 3
Let's say you are given b (the second endpoint of the interval) and w (the width of a segment); then you can find the total number of segments as n = Floor[b/w]. In this case there is a trivial way to maximize the Riemann sum, and the function giving the i'th segment height is f(b - (b*i)/(n+1)). Actually this is an assumption and I'm not 100% sure.
A maximized example for 17 segments on the closed interval [0, 3] for the function Sqrt[x], real values:
The segment-height function in this case is Re[Sqrt[3 - 3*Range[1,17]/18]], and the values are:
Exact form:
{Sqrt[17/6], 2 Sqrt[2/3], Sqrt[5/2],
Sqrt[7/3], Sqrt[13/6], Sqrt[2],
Sqrt[11/6], Sqrt[5/3], Sqrt[3/2],
2/Sqrt[3], Sqrt[7/6], 1, Sqrt[5/6],
Sqrt[2/3], 1/Sqrt[2], 1/Sqrt[3],
1/Sqrt[6]}
Approximated form:
{1.6832508230603465, 1.632993161855452, 1.5811388300841898,
1.5275252316519468, 1.4719601443879744, 1.4142135623730951,
1.35400640077266, 1.2909944487358056, 1.224744871391589,
1.1547005383792517, 1.0801234497346435, 1, 0.9128709291752769,
0.816496580927726, 0.7071067811865475, 0.5773502691896258,
0.4082482904638631}
What you have arrived at is a Bin-Packing problem, with a partially filled bin.
Finding b
Suppose b is unknown, or our task is to find the smallest possible b under which all the sticks from the initial bunch fit. Then we can at least bound the b values by:
lower limit: the sum of the segment heights equals the sum of the stick heights
upper limit: the number of segments equals the number of sticks, and the longest stick is shorter than the longest segment height
One of the simplest ways to find b is to take a pivot at (lower limit + upper limit)/2 and check whether a solution exists. The pivot then becomes the new upper or lower limit, and you repeat the process until the required precision is met (see the sketch after the example below).
When you are looking for b you do not need an exact packing, just a suboptimal one, and it will be much faster if you use an efficient algorithm to test whether a pivot point is above or below the actual b.
For example:
sort the sticks by length, largest to smallest
start putting the largest items into the first bin they fit
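Putting those pieces together, a rough Java sketch of the bisection (fits() is a hypothetical feasibility check, e.g. the first-fit-decreasing heuristic just described, run against the segment heights a candidate b produces):

// Bisect on b between the lower and upper limits until the precision eps is met.
static double findSmallestB(double[] sticks, double lo, double hi, double eps) {
    while (hi - lo > eps) {
        double mid = lo + (hi - lo) / 2;  // pivot between the limits
        if (fits(sticks, mid)) {
            hi = mid;                     // feasible: try a smaller b
        } else {
            lo = mid;                     // infeasible: b must be larger
        }
    }
    return hi;
}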
This is equivalent to having multiple knapsacks (assuming these blocks are the same 'height', this means there's one knapsack for each 'line'), and is thus an instance of the bin packing problem.
See http://en.wikipedia.org/wiki/Bin_packing
How can I stack these sticks within the parabola such that I am minimizing the (vertical) space it uses as much as possible?
Just deal with it like any other Bin Packing problem. I'd throw meta-heuristics on it (such as tabu search, simulated annealing, ...) since those algorithms aren't problem specific.
For example, I'd start from my Cloud Balance example (= a form of Bin Packing) in Drools Planner. If all the sticks have the same height and there's no vertical space between 2 sticks on top of each other, there's not much I'd have to change:
Rename Computer to ParabolicRow. Remove its properties (cpu, memory, bandwidth). Give it a unique level (where 0 is the lowest row). Create a number of ParabolicRows.
Rename Process to Stick
Rename ProcessAssignement to StickAssignment
Rewrite the hard constraints so it checks if there's enough room for the sum of all Sticks assigned to a ParabolicRow.
Rewrite the soft constraints to minimize the highest level of all ParabolicRows.
I'm fairly sure it is equivalent to bin packing:
informal reduction
Let x be the width of the widest row; make the bins 2x big, and create for every row a placeholder element which is 2x - rowWidth big. Then two placeholder elements cannot be packed into one bin.
To reduce bin packing to parabolic knapsack, you just create placeholder elements of size width - binsize for all rows that are bigger than the needed bin size. Furthermore, add placeholders that fill the whole row for all rows that are smaller than binsize.
This would obviously mean your problem is NP-hard.
For other ideas look here maybe: http://en.wikipedia.org/wiki/Cutting_stock_problem
Most likely this is the 0-1 knapsack or a bin-packing problem. This is an NP-hard problem, and I can't explain an exact solution to you, but you can get good approximations with greedy algorithms. Here is a useful article about it, http://www.developerfusion.com/article/5540/bin-packing, which I used to write my PHP bin-packing class at phpclasses.org.
Props to those who mentioned the fact that the levels could sit at varying heights (e.g., assuming the sticks are 1 unit 'thick', level 1 could go from 0.1 to 1.1 units, or from 0.2 to 1.2 units instead).
You could of course expand the "multiple bin packing" methodology and test arbitrarily small increments (e.g., run the multiple bin-packing methodology with levels starting at 0.0, 0.1, 0.2, ..., 0.9) and then choose the best result, but it seems you would get stuck calculating for an infinite amount of time unless you had some methodology to verify that you had gotten it 'right' (or, more precisely, that you had all the 'rows' correct as to what they contained, at which point you could shift them down until they met the edge of the parabola).
Also, the OP did not specify that the sticks had to be laid horizontally - although perhaps the OP implied it with those sweet drawings.
I have no idea how to solve such an issue optimally, but I bet there are certain cases where you could randomly place sticks, test whether they are 'inside' the parabola, and beat any of the methodologies that rely only on horizontal rows.
(Consider the case of a narrow parabola that we are trying to fill with 1 long stick.)
I say just throw them all in there and shake them ;)

Find the "largest" dense sub matrix in a large sparse matrix

Given a large sparse matrix (say 10k+ by 1M+) I need to find a subset, not necessarily contiguous, of the rows and columns that forms a dense matrix (all non-zero elements). I want this submatrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect-ratio constraints.
Are there any known exact or approximate solutions to this problem?
A quick scan on Google seems to give a lot of close-but-not-exactly results. What terms should I be looking for?
edit: Just to clarify: the submatrix need not be contiguous. In fact the row and column order is completely arbitrary, so adjacency is completely irrelevant.
A thought based on Chad Okere's idea
Order the rows from largest count to smallest count (not necessary, but might help performance)
Select two rows that have a "large" overlap
Add all other rows that won't reduce the overlap
Record that set
Add whatever row reduces the overlap by the least
Repeat at #3 until the result gets too small
Start over at #2 with a different starting pair
Continue until you decide the result is good enough
I assume you want something like this. You have a matrix like
1100101
1110101
0100101
You want columns 1,2,5,7 and rows 1 and 2, right? That submatrix would be 4x2 with 8 elements. Or you could go with columns 2,5,7 and rows 1,2,3, which would be a 3x3 matrix.
If you want an 'approximate' method, you could start with a single non-zero element, then go on to find another non-zero element and add its row and column to your list. At some point you'll run into a non-zero element that, if its row and column were added to your collection, would leave the collection no longer entirely non-zero.
So for the above matrix, if you added (1,1) and (2,2) you would have rows 1,2 and columns 1,2 in your collection. If you tried to add the element at row 3, column 7, it would cause a problem, because row 3 has a zero in column 1. So you couldn't add it. You could add (2,5) and (2,7) though, creating the 4x2 submatrix.
You would basically iterate until you can't find any new rows and columns to add. That gets you to a local optimum. You could store the result and start again from another starting point (perhaps one that didn't fit into your current solution).
Then just stop when you can't find any more after a while.
That, obviously, would take a long time, but I don't know whether you can do it any more quickly.
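A rough Java sketch of that growth loop (boolean matrix, 0-based indices; canAdd checks that the candidate row and column keep the collection entirely non-zero):

import java.util.HashSet;
import java.util.Set;

// Grow a dense submatrix from one seed element until nothing more can be added.
static Set<Integer>[] growDense(boolean[][] m, int seedRow, int seedCol) {
    Set<Integer> rows = new HashSet<>();
    Set<Integer> cols = new HashSet<>();
    rows.add(seedRow);
    cols.add(seedCol);
    boolean grew = true;
    while (grew) {
        grew = false;
        for (int i = 0; i < m.length && !grew; i++) {
            for (int j = 0; j < m[0].length && !grew; j++) {
                if (!m[i][j] || (rows.contains(i) && cols.contains(j))) continue;
                if (canAdd(m, rows, cols, i, j)) {
                    rows.add(i);
                    cols.add(j);
                    grew = true;
                }
            }
        }
    }
    return new Set[] { rows, cols };                 // a local optimum
}

static boolean canAdd(boolean[][] m, Set<Integer> rows, Set<Integer> cols, int i, int j) {
    for (int c : cols) if (!m[i][c]) return false;   // new row vs existing columns
    for (int r : rows) if (!m[r][j]) return false;   // existing rows vs new column
    return true;                                     // m[i][j] itself was already checked
}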
I know you aren't working on this anymore, but I thought someone might have the same question as me in the future.
So, after realizing this is an NP-hard problem (by reduction from MAX-CLIQUE) I decided to come up with a heuristic that has worked well for me so far:
Given an N x M binary/boolean matrix, find a large dense submatrix:
Part I: Generate reasonable candidate submatrices
Consider each of the N rows to be an M-dimensional binary vector, v_i, where i = 1 to N
Compute a distance matrix for the N vectors using the Hamming distance
Use the UPGMA (Unweighted Pair Group Method with Arithmetic Mean) algorithm to cluster vectors
Initially, each of the v_i vectors is a singleton cluster. Step 3 above (clustering) gives the order that the vectors should be combined into submatrices. So each internal node in the hierarchical clustering tree is a candidate submatrix.
Part II: Score and rank candidate submatrices
For each candidate submatrix, calculate D, the number of elements in its dense subset, by eliminating any column with one or more zeros.
Select the submatrix that maximizes D
I also had some considerations regarding the min number of rows that needed to be preserved from the initial full matrix, and I would discard any candidate submatrices that did not meet this criteria before selecting a submatrix with max D value.
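For Part II, the scoring step might look like this (a sketch: score a candidate set of rows by dropping every column that contains a zero):

// D = (#rows) * (#columns that are non-zero in every candidate row).
static int scoreCandidate(boolean[][] m, int[] candidateRows) {
    int cols = m[0].length, dense = 0;
    for (int j = 0; j < cols; j++) {
        boolean allOnes = true;
        for (int i : candidateRows) {
            if (!m[i][j]) { allOnes = false; break; }
        }
        if (allOnes) dense++;
    }
    return candidateRows.length * dense;
}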
Is this a Netflix problem?
MATLAB or some other sparse matrix libraries might have ways to handle it.
Is your intent to write your own?
Maybe the 1D approach for each row would help you. The algorithm might look like this:
Loop over each row
Find the index of the first non-zero element
Find the index of the non-zero element with the largest span between non-zero columns in each row, and store both.
Sort the rows from largest to smallest span between non-zero columns.
At this point I start getting fuzzy (sorry, not an algorithm designer). I'd try looping over each row, lining up the indexes of the starting points, and looking for the maximum non-zero run of column indexes that I could find.
You don't specify whether or not the dense matrix has to be square. I'll assume not.
I don't know how efficient this is or what its Big-O behavior would be. But it's a brute force method to start with.
EDIT: This is NOT the same as the problem below... my bad...
But based on the last comment below, it might be equivalent to the following:
Find the furthest vertically separated pair of zero points that have no zero point between them.
Find the furthest horizontally separated pair of zero points that have no zeros between them ?
Then the horizontal region you're looking for is the rectangle that fits between these two pairs of points?
This exact problem is discussed in a gem of a book called "Programming Pearls" by Jon Bentley, and, as I recall, although there is a solution in one dimension, there is no easy answer for the 2-d or higher dimensional variants ...
The 1-D problem is, effectively: find the largest sum of a contiguous subset of a set of numbers.
Iterate through the elements, keeping a running total from a specific previous element, and the maximum subtotal seen so far (with the start and end elements that generated it). At each element, if the running subtotal is greater than the maximum seen so far, the maximum and its end element are updated. If the running total goes below zero, the start element is reset to the current element and the running total is reset to zero.
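That 1-D algorithm (Kadane's algorithm) in a minimal Java sketch, returning the best sum together with its start and end indices:

// Maximum-sum contiguous subarray; returns {bestSum, start, end}.
static int[] maxSubarray(int[] a) {
    int bestSum = a[0], bestStart = 0, bestEnd = 0;
    int runSum = 0, runStart = 0;
    for (int i = 0; i < a.length; i++) {
        runSum += a[i];
        if (runSum > bestSum) { bestSum = runSum; bestStart = runStart; bestEnd = i; }
        if (runSum < 0) { runSum = 0; runStart = i + 1; }  // restart the run here
    }
    return new int[] { bestSum, bestStart, bestEnd };
}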
The 2-D problem came from an attempt to build a visual image-processing algorithm: within a stream of brightness values representing pixels in a 2-color image, find the "brightest" rectangular area within the image, i.e. the contiguous 2-D submatrix with the highest sum of brightness values, where "brightness" was measured as the difference between a pixel's brightness value and the overall average brightness of the entire image (so many elements had negative values).
EDIT: To look up the 1-D solution I dredged up my copy of the 2nd edition of this book, and in it, Jon Bentley says "The 2-D version remains unsolved as this edition goes to print..." which was in 1999.
