Positions on grid 'used' - algorithm

I have a puzzle to solve which takes the size of a grid as input; the grid is always square. A number of points on the grid are then provided, and every square in the same row or column as a given point is 'taken'.
For example, imagine a 10 x 10 grid with (1,1) at the bottom left and (10,10) at the top right. If the point (2,1) is given, then the squares to its left and right (10 squares, including the point itself) and the squares above and below (another 9 squares) are taken. So by simple arithmetic, for an n x n grid the first point provided takes n + (n-1) squares.
But it gets more complicated when further points are provided. If the next point is, say, (5,5), then another 19 squares are 'taken', minus those squares overlapping the first point's row and column. A point such as (3,1) could then be provided, which overlaps even more.
Is there an algorithm for this type of problem?
Or is it simply a matter of holding a two-dimensional array and placing an x for each taken square, then at the end totting up the taken (or non-taken) squares? That would work, but I was wondering if there is an easier way.

Keep two sets: X (storing all x-coords) and Y (storing all y-coords). The number of squares taken will be n * (|X| + |Y|) - |X| * |Y|. This follows because each unique x-coord removes a column of n squares, and each unique y-coord removes a row of n squares. But this counts the intersections of the removed rows and columns twice, so we subtract |X| * |Y| to account for this.
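A minimal Python check of this formula (the function name is mine):

def count_taken(n, points):
    xs = {x for x, _ in points}
    ys = {y for _, y in points}
    # each distinct column takes n squares, each distinct row takes n squares,
    # and every row/column crossing has been counted twice
    return n * (len(xs) + len(ys)) - len(xs) * len(ys)

print(count_taken(10, [(2, 1)]))          # 19 == n + (n - 1)
print(count_taken(10, [(2, 1), (5, 5)]))  # 36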

One way to do it is to keep track of the positions that are taken in some data structure, for example a set.
At the first step this means adding n + (n - 1) squares to that data structure.
At each later step it means checking, for each square on the horizontal and vertical lines through the given (x, y), whether it is already in the data structure. If not, you add it; otherwise it was already taken in an earlier step.
In fact, the first step is just a special case of the later ones, because in the first round no points are taken yet. So in general the algorithm is: keep track of the taken points, and add any new ones to the data structure.
So in pseudocode:

create taken_points = an empty data structure (e.g., a set)
for each point (x, y) to process:
    set counter = 0
    for each point (px, py) on the horizontal and vertical lines through (x, y):
        if (px, py) is already in taken_points:
            do nothing
        else:
            add (px, py) to taken_points and increment counter

You've now updated taken_points to contain all the points taken so far, and counter is the number of points newly taken in the most recent round.
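A direct Python rendering of that pseudocode might look like this (assuming 1-based coordinates on an n x n grid):

def process_point(n, x, y, taken_points):
    """Mark the row and column through (x, y); return how many squares were newly taken."""
    counter = 0
    line = [(px, y) for px in range(1, n + 1)] + [(x, py) for py in range(1, n + 1)]
    for p in line:
        if p not in taken_points:
            taken_points.add(p)
            counter += 1
    return counter

taken = set()
print(process_point(10, 2, 1, taken))  # 19 squares taken in the first round
print(process_point(10, 5, 5, taken))  # 17 more, 36 in total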

Here is a way to do it without using much extra space:

rowVisited[n] = {0}
colVisited[n] = {0}
totalrows = 0, totalcol = 0   // totals of rows and columns visited so far
total = 0;                    // running count of taken squares

// for each given point (x, y):
if (!rowVisited[x]) {
    total = total + n - totalcol;   // a new row takes n squares, minus crossings with taken columns
}
if (!colVisited[y]) {
    total = total + n - 1 - totalrows + rowVisited[x];
    // a new column takes n squares, minus crossings with already-visited rows,
    // minus the (x, y) square itself if row x was only just counted above
}
if (!rowVisited[x]) {
    rowVisited[x] = 1;
    totalrows++;
}
if (!colVisited[y]) {
    colVisited[y] = 1;
    totalcol++;
}
print total
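A Python rendering of the same constant-extra-space idea (my own translation of the code above, with 1-based coordinates as in the question):

def taken_after_each_point(n, points):
    """Return the running total of taken squares after each given point."""
    row_visited = [False] * (n + 1)
    col_visited = [False] * (n + 1)
    total_rows = total_cols = total = 0
    results = []
    for x, y in points:
        if not row_visited[x]:
            total += n - total_cols                    # new row: n squares minus taken column crossings
        if not col_visited[y]:
            # new column: n squares minus crossings with visited rows,
            # minus the (x, y) square itself if row x was only just counted
            total += n - total_rows - (0 if row_visited[x] else 1)
        if not row_visited[x]:
            row_visited[x] = True
            total_rows += 1
        if not col_visited[y]:
            col_visited[y] = True
            total_cols += 1
        results.append(total)
    return results

print(taken_after_each_point(10, [(2, 1), (5, 5), (3, 1)]))  # [19, 36, 44]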

Related

Number of ways N circles of different radius can be arranged in a line

Question: Given M points on a line separated by 1 unit. Find the number of ways N circles of different radii can be drawn so that they don't intersect, overlap, or lie one inside another, provided that the centers of the circles must be at those M points.
Example 1: N=3,M=6,r1=1,r2=1,r3=1 Answer: 24 ways.
Example 2: N=2,M=5 ,r1=1,r2=2 Answer: 6 ways.
Example 3: N=1,M=10,r=50. Answer =10 ways.
I found this question online and have not been able to solve it. So far I have only been able to work out that any circle can take spaces from n−r to n−2r. But among other issues, how can I adjust for edge cases in which a circle with radius 3 takes the (n−4)th point? Now the last point will be left untouched, but I cannot place any circle with a radius greater than 1. I am not able to see any generalized mathematical solution to this.
If the centers of the circles could be placed at non-integer x and y coordinates, then the answer would either be zero, because the length is too short, or infinite, because there is enough length and infinitely many translations exist.
So, since you have to compute a result, I will assume that the coordinates of the M points are integers.
If there is a single circle, then the solution is the number of points at which the circle can legally be placed.
If there are at least two circles, then you need to calculate the sum of the diameters; if that happens to be larger than the total length of the line we are speaking about, then you have no solution. If that is not the case, then you subtract the sum of the diameters from the total length, getting Complementer. You also have N! permutations for the order of the circles. And you will have Complementer - 1 possible locations where you can distribute the gaps between the circles. The lengths of the gaps are G1, ..., Gn-1.
We know that G1 + ... + Gn-1 = Complementer
The number of possible distributions of G1, ..., Gn-1 is D. The formula therefore would be
N! * D
The remaining question is: how can we compute D?
Solution:
function distr(depth, maxDepth, amount)
    if (depth = maxDepth) then
        return 1    // we need to put the remaining elements in the last slot
    end if
    sum = 1         // the case where this slot takes the whole remaining amount
    for i = amount - 1 to 0 do
        sum = sum + distr(depth + 1, maxDepth, amount - i)
    end for
    return sum
end distr
You need to call distr with depth = 1, maxDepth = N-1, amount = Complementer.
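For reference, a direct Python translation of that pseudocode, with the accumulation written out explicitly and memoization added (the names are mine):

from functools import lru_cache

@lru_cache(maxsize=None)
def distr(depth, max_depth, amount):
    """Ways to split `amount` identical gap units over slots depth..max_depth."""
    if depth == max_depth:
        return 1                 # all remaining units go into the last slot
    total = 1                    # the case where this slot takes the whole remaining amount
    for i in range(amount - 1, -1, -1):
        total += distr(depth + 1, max_depth, amount - i)
    return total

print(distr(1, 2, 2))  # 3: the gap pairs (0, 2), (1, 1), (2, 0)

For what it's worth, distributing Complementer units over N-1 gaps also has the stars-and-bars closed form C(Complementer + N - 2, N - 2).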

Algorithm required for maximizing number of squares fitting in rectangle with constraints

I need an algorithm which counts the maximum possible number of squares of side X which can fit inside an M x N rectangle, with the constraint that some grid cells are unavailable.
For example,
when X = 2 and M = N = 8 (gray cells cannot be used)
we can fit at most 10 squares inside this rectangle.
As in the solution below, there may be multiple arrangements that achieve the maximum count.
Which algorithm can do it for me?
Is there any DP solution, as opposed to brute force?
Is there a name for this particular kind of algorithm?
[NB: I assume greedy won't work here; if I am wrong, correct me.]
Please describe the best approach you know.
As requested, for X = 2 you can use DP with bitmasks. You can either proceed by columns or rows, so you usually take the smaller one. Let's assume that the minimum number is the number of columns.
You can do a DP in which the state is the current row, the current column and a bitmask telling which columns are blocked in the current row.
Let me explain it a bit with your example.
Original state: row = 0, column = 0, bitmask = 00000000.
Check if you can place a square in your current position, in this case you can. So the result will be max(1 + placing the square, not placing the square). If you can't, then you only have one option: not placing it.
In case you don't place the square, your next state is row = 0, column = 1, bitmask = 00000000. Notice that the first bit now refers to the next row, not the current one, as we have already passed that column.
In case you place the square, your next state is row = 0, column = 2 (we jump two because of the square) and bitmask = 11000000. That bitmask tells you that the first two positions of the second row are already blocked.
Now let's consider a different state, row = 2, column = 4, bitmask = 11110000. This corresponds to the case of your solution when you are in the third row and the fifth column. The bitmask tells us that the four first cells of the next row are already blocked and that the four next cells of the current row are free.
So again, you check whether or not you can place the square. The cells are not blocked by the bitmask, but the grid contains a marked cell, so you can't. The next state is then row = 2, column = 5, bitmask = 11110000. With that example I want you to notice that the bitmask doesn't tell you anything about cells blocked by the original grid, only about cells blocked by your previously placed squares.
So every state is checked in constant time, and thus the complexity of the algorithm is proportional to the number of states, which is O(NM * 2^(min(N, M))).
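A short memoized Python sketch of this state space (my own illustration, not the answer's code). It assumes X = 2, that the grid is given as a list of strings where '.' is free and '#' is a gray/blocked cell, and that the columns are the smaller dimension:

from functools import lru_cache

def max_squares(grid):
    N, M = len(grid), len(grid[0])   # rows, columns (M should be the smaller dimension)

    @lru_cache(maxsize=None)
    def dp(row, col, mask):
        # bits j < col of mask describe row+1; bits j >= col describe the current row
        if row == N:
            return 0
        if col == M:
            return dp(row + 1, 0, mask)
        # option 1: don't place a square here; bit `col` now describes cell (row+1, col),
        # which nothing has blocked yet, so clear it
        best = dp(row, col + 1, mask & ~(1 << col))
        # option 2: place a 2x2 square with its top-left corner at (row, col)
        if (col + 1 < M and row + 1 < N
                and not mask & (1 << col) and not mask & (1 << (col + 1))
                and grid[row][col] == '.' and grid[row][col + 1] == '.'
                and grid[row + 1][col] == '.' and grid[row + 1][col + 1] == '.'):
            best = max(best, 1 + dp(row, col + 2, mask | (1 << col) | (1 << (col + 1))))
        return best

    return dp(0, 0, 0)

print(max_squares(["....", "....", "....", "...."]))  # 4

The memo key is exactly the (row, column, bitmask) state described above, so the number of distinct subproblems matches the O(NM * 2^(min(N, M))) bound.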

Best way to find all points of lattice in sphere

Given a bunch of arbitrary vectors (stored in a matrix A) and a radius r, I'd like to find all integer-valued linear combinations of those vectors which land inside a sphere of radius r. The necessary coordinates I would then store in a Matrix V. So, for instance, if the linear combination
K=[0; 1; 0]
lands inside my sphere, i.e. something like
if norm(A*K) <= r then
V(:,1)=K
end
etc.
The vectors in A are guaranteed to be the simplest possible basis for the given lattice, and the largest vector will have length 1. I'm not sure if that restricts the vectors in any useful way, but I suspect it might: they won't have directions as similar to each other as a less ideal basis would.
I tried a few approaches already but none of them seem particularly satisfying. I can't seem to find a nice pattern to traverse the lattice.
My current approach involves starting in the middle (i.e. with the linear combination of all 0s) and going through the necessary coordinates one by one. It involves storing a bunch of extra vectors to keep track of, so I can go through all the octants (in the 3D case) of the coordinates and find them one by one. This implementation seems awfully complex and not very flexible (in particular, it doesn't seem easily generalizable to an arbitrary number of dimensions; although that isn't strictly necessary for the current purpose, it would be a nice-to-have).
Is there a nice* way to find all the required points?
(*Ideally both efficient and elegant**. If REALLY necessary, it wouldn't matter THAT much to have a few extra points outside the sphere, but preferably not that many more. I definitely do need all the vectors inside the sphere. If it makes a large difference, I'm most interested in the 3D case.
**I'm pretty sure my current implementation is neither.)
Similar questions I found:
Find all points in sphere of radius r around arbitrary coordinate - this is actually a much more general case than what I'm looking for. I am only dealing with periodic lattices and my sphere is always centered at 0, coinciding with one point on the lattice.
But I don't have a list of points but rather a matrix of vectors with which I can generate all the points.
How to efficiently enumerate all points of sphere in n-dimensional grid - the case of a completely regular hypercubic lattice and the Manhattan distance. I'm looking for completely arbitrary lattices and Euclidean distance (or, for efficiency purposes, obviously the square of that).
Offhand, without proving any assertions, I think that:
1) if the set of vectors is not of maximal rank then the number of solutions is infinite;
2) if the set is of maximal rank, then the image of the linear transformation generated by the vectors is a subspace (e.g., a plane) of the target space, which intersects the sphere in a lower-dimensional sphere;
3) it follows that you can reduce the problem to a 1-1 linear transformation (a k x k matrix on a k-dimensional space);
4) since the matrix is invertible, you can "pull back" the sphere to an ellipsoid in the space containing the lattice points, and as a bonus you get a nice geometric description of the ellipsoid (principal axis theorem);
5) your problem now becomes exactly one of determining the lattice points inside the ellipsoid.
The latter problem is related to an old problem (counting the lattice points inside an ellipse) which was considered by Gauss, who derived a good approximation. Determining the lattice points inside an ellipse(oid) is probably not such a tidy problem, but it probably can be reduced one dimension at a time (the cross-section of an ellipsoid and a plane is another ellipsoid).
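To make the pull-back concrete, here is a small numpy sketch (my own illustration, not part of the answer): an integer combination K lands inside the sphere exactly when K^T (A^T A) K <= r^2, so the valid K are the lattice points inside an ellipsoid defined by the Gram matrix A^T A. The brute-force bounding box (the `box` limit is assumed to be supplied by the caller) is only for illustration.

import itertools
import numpy as np

def lattice_points_in_sphere(A, r, box):
    """Enumerate integer K in [-box, box]^n with ||A K|| <= r (brute force, illustrative only)."""
    G = A.T @ A                      # Gram matrix: defines the pulled-back ellipsoid
    n = A.shape[1]
    hits = []
    for K in itertools.product(range(-box, box + 1), repeat=n):
        K = np.array(K)
        if K @ G @ K <= r * r:       # quadratic-form test; no need to form A @ K explicitly
            hits.append(K)
    return hits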
I found a method that makes me a lot happier for now. There may still be possible improvements, so if you have a better method, or find an error in this code, definitely please share. Though here is what I have for now: (all written in SciLab)
Step 1: Figure out the maximal ranges as defined by a bounding n-parallelotope aligned with the axes of the lattice vectors. Thanks to ElKamina for the vague suggestion, as well as this reply by chappers to another of my questions over on math.se: https://math.stackexchange.com/a/1230160/49989
function I=findMaxComponents(A,r) //given a matrix A of lattice basis vectors
                                  //and a sphere radius r,
                                  //find the corners of the bounding parallelotope
                                  //built from the lattice, and store it in I.
    [dims,vecs]=size(A); //figure out how many vectors there are in A (and, unnecessarily, how long they are)
    U=eye(vecs,vecs);    //builds matching unit matrix
    iATA=pinv(A'*A);     //finds the (pseudo-)inverse of A^T A
    iAT=pinv(A');        //finds the (pseudo-)inverse of A^T
    I=[];                //initializes I as an empty vector
    for i=1:vecs         //for each lattice vector,
        t=r*(iATA*U(:,i))/norm(iAT*U(:,i)) //find the maximum component such that
                                           //it fits in the bounding n-parallelotope
                                           //of a (n-1)-sphere of radius r
        I=[I,t(i)];      //and append it to I
    end
    I=[-I;I];            //also append the minima (by symmetry, the negative maxima)
endfunction
In my question I only asked for a general basis, i.e., for n dimensions, a set of n arbitrary but linearly independent vectors. The above code, by virtue of using the pseudo-inverse, works for matrices of arbitrary shapes and, similarly, Scilab's "A'" returns the conjugate transpose rather than just the transpose of A, so it should equally work for complex matrices.
In the last step I put the corresponding minimal components.
For one such A as an example, this gives me the following in Scilab's console:
A =
0.9701425 - 0.2425356 0.
0.2425356 0.4850713 0.7276069
0.2425356 0.7276069 - 0.2425356
r=3;
I=findMaxComponents(A,r)
I =
- 2.9494438 - 3.4186986 - 4.0826424
2.9494438 3.4186986 4.0826424
I=int(I)
I =
- 2. - 3. - 4.
2. 3. 4.
The values found by findMaxComponents are the largest possible coefficients of each lattice vector such that a linear combination with that coefficient exists which still lands on the sphere. Since I'm looking for the largest such combinations with integer coefficients, I can safely drop the part after the decimal point to get the maximal plausible integer ranges. So for the given matrix A, I'll have to go from -2 to 2 in the first component, from -3 to 3 in the second and from -4 to 4 in the third, and I'm sure to land on all the points inside the sphere (plus superfluous extra points, but importantly definitely every valid point inside). Next up:
Step 2: using the above information, generate all the candidate combinations.
function K=findAllCombinations(I) //takes a matrix of the form produced by
                                  //findMaxComponents() and returns a matrix
                                  //which lists all the integer linear combinations
                                  //in the respective ranges.
    v=I(1,:);   //starting from the minimal vector
    K=[];
    next=1;     //keeps track of what component to advance next
    changed=%F; //keeps track of whether to add the vector to the output
    while or(v~=I(2,:)) //as long as not all components of v match all components of the maximum vector
        if v <= I(2,:) then //if each current component is smaller than each largest possible component
            if ~changed then
                K=[K;v];   //store the vector and
            end
            v(next)=v(next)+1; //advance the component by 1
            next=1;            //also reset next to 1
            changed=%F;
        else
            v(1:next)=I(1,1:next); //reset all components smaller than or equal to the current one and
            next=next+1;           //advance the next larger component next time
            changed=%T;
        end
    end
    K=[K;I(2,:)]'; //while loop ends a single iteration early so add the maximal vector too
                   //also transpose K to fit better with the other functions
endfunction
So now that I have that, all that remains is to check whether a given combination actually does lie inside or outside the sphere. All I gotta do for that is:
Step 3: Filter the combinations to find the actually valid lattice points
function points=generatePoints(A,K,r)
    possiblePoints=A*K; //explicitly generates all the possible points
    points=[];
    for i=possiblePoints
        if i'*i<=r*r then //filter those that are too far from the origin
            points=[points i];
        end
    end
endfunction
And I get all the combinations that actually do fit inside the sphere of radius r.
For the above example, the output is rather long: Of originally 315 possible points for a sphere of radius 3 I get 163 remaining points.
The first 4 are: (each column is one)
- 0.2425356 0.2425356 1.2126781 - 0.9701425
- 2.4253563 - 2.6678919 - 2.4253563 - 2.4253563
1.6977494 0. 0.2425356 0.4850713
So the remainder of the work is optimization. Presumably some of those loops could be made faster, and especially as the number of dimensions goes up I have to generate an awful lot of points which I then have to discard, so maybe there is a better way than taking the bounding n-parallelotope of the (n-1)-sphere as a starting point.
Let us just represent K as X.
The problem can then be represented as:
(a11*x1 + a12*x2 + ...)^2 + (a21*x1 + a22*x2 + ...)^2 + ... <= r^2
Note that the solution set in (x1, x2, ...) will not form a sphere (it is an ellipsoid in coefficient space).
This can be done with recursion on dimension--pick a lattice hyperplane direction and index all such hyperplanes that intersect the r-radius ball. The ball intersection of each such hyperplane itself is a ball, in one lower dimension. Repeat. Here's the calling function code in Octave:
function lat_points(lat_bas_mx,rr)
    % **globals for hyperplane lattice point recursive function**
    clear global; % this seems necessary/important between runs of this function
    global MLB;
    global NN_hat;
    global NN_len;
    global INP; % matrix of interior points, each point(vector) a column vector
    global ctr; % integer counter, for keeping track of lattice point vectors added
                % in the pre-allocated INP matrix; will finish iteration with actual # of points found
    ctr = 0; % counts number of ball-interior lattice points found
    MLB = lat_bas_mx;
    ndim = size(MLB)(1);
    % **create hyperplane normal vectors for recursion step**
    % given full-rank lattice basis matrix MLB (each vector in lattice basis a column),
    % form set of normal vectors between successive, nested lattice hyperplanes;
    % store them as columnar unit normal vectors in NN_hat matrix and their lengths in NN_len vector
    NN_hat = [];
    for jj=1:ndim-1
        tmp_mx = MLB(:,jj+1:ndim);
        tmp_mx = [NN_hat(:,1:jj-1),tmp_mx];
        NN_hat(:,jj) = null(tmp_mx'); % null space of transpose = orthogonal to columns
        tmp_len = norm(NN_hat(:,jj));
        NN_hat(:,jj) = NN_hat(:,jj)/tmp_len;
        NN_len(jj) = dot(MLB(:,jj),NN_hat(:,jj));
        if (NN_len(jj)<0) % NN_hat(:,jj) and MLB(:,jj) must have positive dot product
                          % for cutting hyperplane indexing to work correctly
            NN_hat(:,jj) = -NN_hat(:,jj);
            NN_len(jj) = -NN_len(jj);
        endif
    endfor
    NN_len(ndim) = norm(MLB(:,ndim));
    NN_hat(:,ndim) = MLB(:,ndim)/NN_len(ndim); % the lowest recursion level normal
                                               % is just the last lattice basis vector
    % **estimate number of interior lattice points, and pre-allocate memory for INP**
    vol_ppl = prod(NN_len); % the volume of the ndim dimensional lattice parallelepiped
                            % is just the product of the NN_len's (they amount to the nested altitudes
                            % of hyperplane "parallelepipeds")
    vol_bll = exp( (ndim/2)*log(pi) + ndim*log(rr) - gammaln(ndim/2+1) ); % volume of ndim ball, radius rr
    est_num_pts = ceil(vol_bll/vol_ppl); % estimated number of lattice points in the ball
    err_fac = 1.1; % error factor for memory pre-allocation--assume max of err_fac*est_num_pts columns required in INP
    INP = zeros(ndim,ceil(err_fac*est_num_pts));
    % **call the (recursive) function**
    % for output, global variable INP (matrix of interior points)
    % stores each valid lattice point (as a column vector)
    clp = zeros(ndim,1); % confirmed lattice point (start at origin)
    bpt = zeros(ndim,1); % point at center of ball (initially, at origin)
    rd = 1; % initial recursion depth must always be 1
    hyp_fun(clp,bpt,rr,ndim,rd);
    printf("%i lattice points found\n",ctr);
    INP = INP(:,1:ctr); % trim excess zeros from pre-allocation (if any)
endfunction
Regarding the NN_len(jj)*NN_hat(:,jj) vectors--they can be viewed as successive (nested) altitudes in the ndim-dimensional "parallelepiped" formed by the vectors in the lattice basis, MLB. The volume of the lattice basis parallelepiped is just prod(NN_len)--for a quick estimate of the number of interior lattice points, divide the volume of the ndim-ball of radius rr by prod(NN_len). Here's the recursive function code:
function hyp_fun(clp,bpt,rr,ndim,rd)
    %{
    clp = the lattice point we're entering this lattice hyperplane with
    bpt = location of center of ball in this hyperplane
    rr = radius of ball
    rd = recursion depth--from 1 to ndim
    %}
    global MLB;
    global NN_hat;
    global NN_len;
    global INP;
    global ctr;
    % hyperplane intersection detection step
    nml_hat = NN_hat(:,rd);
    nh_comp = dot(clp-bpt,nml_hat);
    ix_hi = floor((rr-nh_comp)/NN_len(rd));
    ix_lo = ceil((-rr-nh_comp)/NN_len(rd));
    if (ix_hi<ix_lo)
        return % no hyperplane intersections detected w/ ball;
               % get out of this recursion level
    endif
    hp_ix = [ix_lo:ix_hi]; % indices are created wrt the received reference point
    hp_ln = length(hp_ix);
    % loop through detected hyperplanes (updated)
    if (rd<ndim)
        bpt_new_mx = bpt*ones(1,hp_ln) + NN_len(rd)*nml_hat*hp_ix; % an ndim by length(hp_ix) matrix
        clp_new_mx = clp*ones(1,hp_ln) + MLB(:,rd)*hp_ix; % an ndim by length(hp_ix) matrix
        dd_vec = nh_comp + NN_len(rd)*hp_ix; % a length(hp_ix) row vector
        rr_new_vec = sqrt(rr^2-dd_vec.^2);
        for jj=1:hp_ln
            hyp_fun(clp_new_mx(:,jj),bpt_new_mx(:,jj),rr_new_vec(jj),ndim,rd+1);
        endfor
    else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
         % "lattice line" that are inside the ball
        INP(:,ctr+1:ctr+hp_ln) = clp + MLB(:,rd)*hp_ix;
        ctr += hp_ln;
        return
    endif
endfunction
This has some Octave-y/Matlab-y things in it, but most should be easily understandable; M(:,jj) references column jj of matrix M; the tick ' means take the transpose; [A B] concatenates matrices A and B; A=[] declares an empty matrix.
Updated / better optimized from original answer:
"vectorized" the code in the recursive function, to avoid most "for" loops (those slowed it down a factor of ~10; the code now is a bit more difficult to understand though)
pre-allocated memory for the INP matrix-of-interior points (this speeded it up by another order of magnitude; before that, Octave was having to resize the INP matrix for every call to the innermost recursion level--for large matrices/arrays that can really slow things down)
Because this routine was part of a project, I also coded it in Python. From informal testing, the Python version is another 2-3 times faster than this (Octave) version.
For reference, here is the old, much slower code in the original posting of this answer:
% (OLD slower code, using for loops, and constantly resizing
% the INP matrix) loop through detected hyperplanes
if (rd<ndim)
    for jj=1:length(hp_ix)
        bpt_new = bpt + hp_ix(jj)*NN_len(rd)*nml_hat;
        clp_new = clp + hp_ix(jj)*MLB(:,rd);
        dd = nh_comp + hp_ix(jj)*NN_len(rd);
        rr_new = sqrt(rr^2-dd^2);
        hyp_fun(clp_new,bpt_new,rr_new,ndim,rd+1);
    endfor
else % rd=ndim--so at deepest level of recursion; record the points on the given 1-dim
     % "lattice line" that are inside the ball
    for jj=1:length(hp_ix)
        clp_new = clp + hp_ix(jj)*MLB(:,rd);
        INP = [INP clp_new];
    endfor
    return
endif

Largest rectangular sub matrix with the same number

I am trying to come up with a dynamic programming algorithm that finds the largest sub matrix within a matrix that consists of the same number:
example:
{5 5 8}
{5 5 7}
{3 4 1}
Answer : 4 elements due to the matrix
5 5
5 5
This is a question I already answered here (and here, in a modified version). In both cases the algorithm was applied to the binary case (zeros and ones), but the modification for arbitrary numbers is quite easy (sorry, I keep the images for the binary version of the problem). You can do this very efficiently with a two-pass linear O(n) time algorithm, n being the number of elements. However, this is not dynamic programming; I think using dynamic programming here would be clumsy and inefficient in the end, because of the difficulties with problem decomposition that the OP mentioned. Unless it's homework, in which case you can try to impress with this algorithm :-) as there's obviously no faster solution than O(n).
Algorithm (pictures depict the binary case):
Say you want to find the largest rectangle of free (white) elements.
Here follows the two-pass linear O(n) time algorithm (n being the number of elements):
1) in a first pass, go by columns, from bottom to top, and for each element, denote the number of consecutive elements available up to this one:
repeat, until:
Pictures depict the binary case. In case of arbitrary numbers you hold 2 matrices - first with the original numbers and second with the auxiliary numbers that are filled in the image above. You have to check the original matrix and if you find a number different from the previous one, you just start the numbering (in the auxiliary matrix) again from 1.
2) in a second pass you go by rows, holding data structure of potential rectangles, i.e. the rectangles containing current position somewhere at the top edge. See the following picture (current position is red, 3 potential rectangles - purple - height 1, green - height 2 and yellow - height 3):
For each rectangle we keep its height k and its left edge. In other words, we keep track of runs of consecutive columns whose auxiliary value was >= k (i.e. potential rectangles of height k). This data structure can be represented by an array with a double linked list linking the occupied items, and the array size would be limited by the matrix height.
Pseudocode of 2nd pass (non-binary version with arbitrary numbers):
var m[]    // original matrix
var aux[]  // auxiliary matrix filled in the 1st pass
var rect[] // array of potential rectangles, indexed by their height
           // the occupied items are also linked in a double linked list,
           // ordered by height

foreach row = 1..N // go by rows
    foreach col = 1..M
        if (col > 1 AND m[row, col] != m[row, col - 1]) // new number
            close_potential_rectangles_higher_than(0);  // close all rectangles

        height = aux[row, col] // maximal height possible at current position

        if (!rect[height]) {   // rectangle with this height does not exist
            create rect[height]        // open new rectangle
            if (rect[height].next)     // rectangle with nearest higher height
                                       // if it exists, start from its left edge
                rect[height].left_col = rect[height].next.left_col
            else
                rect[height].left_col = col;
        }
        close_potential_rectangles_higher_than(height)
    end for // end row
    close_potential_rectangles_higher_than(0);
    // end of row -> close all rect., supposing col is M+1 now!
end for // end matrix
The function for closing rectangles:
function close_potential_rectangles_higher_than(height)
    close_r = rectangle with highest height (last item in dll)
    while (close_r.height > height) { // higher? close it
        area = close_r.height * (col - close_r.left_col)
        if (area > max_area) { // we have a new maximal rectangle!
            max_area = area
            max_topleft = [row, close_r.left_col]
            max_bottomright = [row + close_r.height - 1, col - 1]
        }
        close_r = close_r.prev
        // remove the closed rectangle from the double linked list
    }
end function
This way you can also get all maximum rectangles. So in the end you get:
And what the complexity will be? You see that the function close_potential_rectangles_higher_than is O(1) per closed rectangle. Because for each field we create 1 potential rectangle at the maximum, the total number of potential rectangles ever present in particular row is never higher than the length of the row. Therefore, complexity of this function is O(1) amortized!
So the whole complexity is O(n) where n is number of matrix elements.
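For comparison, here is a short Python sketch in the same two-pass spirit: the first pass builds the same auxiliary height matrix, and the second pass replaces the double-linked-list bookkeeping with the classic largest-rectangle-in-histogram stack, run separately on each maximal run of equal values in a row (my own variant, not the answer's exact data structure):

def largest_same_value_rect(m):
    """Area of the largest rectangular submatrix consisting of one repeated value, O(n)."""
    rows, cols = len(m), len(m[0])
    h = [0] * cols                        # heights of equal-value runs ending at the current row
    best = 0
    for i in range(rows):
        for j in range(cols):
            h[j] = h[j] + 1 if i > 0 and m[i][j] == m[i - 1][j] else 1
        # split the row into maximal segments of equal value; a valid rectangle
        # can never cross a segment boundary
        start = 0
        for j in range(cols + 1):
            if j == cols or (j > 0 and m[i][j] != m[i][j - 1]):
                best = max(best, largest_in_histogram(h[start:j]))
                start = j
    return best

def largest_in_histogram(heights):
    """Classic stack-based largest rectangle under a histogram."""
    stack, best = [], 0
    for k, x in enumerate(heights + [0]):  # sentinel 0 flushes the stack at the end
        while stack and heights[stack[-1]] >= x:
            height = heights[stack.pop()]
            left = stack[-1] + 1 if stack else 0
            best = max(best, height * (k - left))
        stack.append(k)
    return best

print(largest_same_value_rect([[5, 5, 8], [5, 5, 7], [3, 4, 1]]))  # 4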
A dynamic solution:
Define a new matrix A which will store in A[i,j] two values: the width and the height of the largest submatrix with its upper-left corner at (i,j). Fill this matrix starting from the bottom-right corner, by rows, bottom to top. You'll find four cases:
case 1: none of the right or bottom neighbour elements in the original matrix are equal to the current one, i.e. M[i,j] != M[i+1,j] and M[i,j] != M[i,j+1], M being the original matrix; in this case, the value of A[i,j] is 1x1
case 2: the neighbour element to the right is equal to the current one but the bottom one is different; the value of A[i,j].width is A[i+1,j].width+1 and A[i,j].height=1
case 3: the neighbour element to the bottom is equal but the right one is different; A[i,j].width=1, A[i,j].height=A[i,j+1].height+1
case 4: both neighbours are equal: A[i,j].width = min(A[i+1,j].width+1, A[i,j+1].width) and A[i,j].height = min(A[i,j+1].height+1, A[i+1,j].height)
The size of the largest submatrix that has its upper-left corner at (i,j) is A[i,j].width*A[i,j].height, so you can update the max value found while calculating A[i,j].
The bottom row and the rightmost column elements are treated as if their neighbours to the bottom and to the right, respectively, are different.
in your example, the resulting matrix A would be:
{2:2 1:2 1:1}
{2:1 1:1 1:1}
{1:1 1:1 1:1}
being w:h width:height
Modification to the above answer:
Define a new matrix A which will store in A[i,j] two values: the width and the height of the largest submatrix with its upper-left corner at (i,j). Fill this matrix starting from the bottom-right corner, by rows, bottom to top. You'll find four cases:
case 1: none of the right or bottom neighbour elements in the original matrix are equal to the current one, i.e. M[i,j] != M[i+1,j] and M[i,j] != M[i,j+1], M being the original matrix; in this case, the value of A[i,j] is 1x1
case 2: the neighbour element to the right is equal to the current one but the bottom one is different; the value of A[i,j].width is A[i+1,j].width+1 and A[i,j].height=1
case 3: the neighbour element to the bottom is equal but the right one is different; A[i,j].width=1, A[i,j].height=A[i,j+1].height+1
case 4: both neighbours are equal:
Three rectangles are considered:
1. A[i,j].width = A[i,j+1].width+1; A[i,j].height = 1;
2. A[i,j].height = A[i+1,j].height+1; A[i,j].width = 1;
3. A[i,j].width = min(A[i+1,j].width+1, A[i,j+1].width) and A[i,j].height = min(A[i,j+1].height+1, A[i+1,j].height)
The one with the max area in the above three cases will be considered to represent the rectangle at this position.
The size of the largest matrix that has the upper left corner at i,j is A[i,j].width*A[i,j].height so you can update the max value found while calculating the A[i,j]
the bottom row and the rightmost column elements are treated as if their neighbours to the bottom and to the right respectively are different.
This question is a duplicate. I have tried to flag it as a duplicate. Here is a Python solution, which also returns the position and shape of the largest rectangular submatrix:
#!/usr/bin/env python3
import numpy
s = '''5 5 8
5 5 7
3 4 1'''
nrows = 3
ncols = 3
skip_not = 5
area_max = (0, [])
a = numpy.fromstring(s, dtype=int, sep=' ').reshape(nrows, ncols)
w = numpy.zeros(dtype=int, shape=a.shape)
h = numpy.zeros(dtype=int, shape=a.shape)
for r in range(nrows):
    for c in range(ncols):
        if not a[r][c] == skip_not:
            continue
        if r == 0:
            h[r][c] = 1
        else:
            h[r][c] = h[r-1][c]+1
        if c == 0:
            w[r][c] = 1
        else:
            w[r][c] = w[r][c-1]+1
        minw = w[r][c]
        for dh in range(h[r][c]):
            minw = min(minw, w[r-dh][c])
            area = (dh+1)*minw
            if area > area_max[0]:
                area_max = (area, [(r, c, dh+1, minw)])
print('area', area_max[0])
for t in area_max[1]:
    print('coord and shape', t)
Output:
area 4
coord and shape (1, 1, 2, 2)

2D coordinate normalization

I need to implement a function which normalizes coordinates. I define normalize as follows (please suggest a better term if I'm wrong):
Mapping entries of a data set from their natural range to values between 0 and 1.
Now this was easy in one dimension:
static List<float> Normalize(float[] nums)
{
    float max = Max(nums);
    float min = Min(nums);
    float delta = max - min;
    List<float> li = new List<float>();
    foreach (float i in nums)
    {
        li.Add((i - min) / delta);
    }
    return li;
}
I need a 2D version as well, and that one has to keep the aspect ratio intact. But I'm having some trouble figuring out the math.
Although the code posted is in C#, the answers need not be.
Thanks in advance. :)
I am posting my response as an answer because I do not have enough points to make a comment.
My interpretation of the question: How do we normalize the coordinates of a set of points in 2 dimensional space?
A normalization operation involves a "shift and scale" operation. In the case of 1-dimensional space this is fairly easy and intuitive (as pointed out by @Mizipzor).
normalizedX=(originalX-minX)/(maxX-minX)
In this case we are first shifting the value by a distance of minX and then scaling it by the range, which is given by (maxX-minX). The shift operation ensures that the minimum moves to 0, and the scale operation squashes the distribution so that it has an upper limit of 1.
In the case of 2D, simply dividing by the largest dimension is not enough. Why?
Consider the simplified case with just 2 points as shown below.
The maximum value of any dimension is the Y value of point B, and this is 10000.
Coordinates of normalized A => 5000/10000, 8000/10000, i.e. 0.5, 0.8
Coordinates of normalized B => 7000/10000, 10000/10000, i.e. 0.7, 1.0
The X and Y values are all within 0 and 1. However, the distribution of the normalized values is far from uniform. The minimum value is just 0.5. Ideally this should be closer to 0.
Preferred approach for normalizing 2d coordinates
To get a more even distribution we should do a "shift" operation around the minimum of all X values and minimum of all Y values. This could be done around the mean of X and mean of Y as well. Considering the above example,
the minimum of all X is 5000
the minimum of all Y is 8000
Step 1 - Shift operation
A=>(5000-5000,8000-8000), i.e (0,0)
B=>(7000-5000,10000-8000), i.e. (2000,2000)
Step 2 - Scale operation
To scale down the values we need some maximum. We could use the diagonal AB, whose length is sqrt(2000^2 + 2000^2) ≈ 2828.
A => (0/2828, 0/2828), i.e. (0, 0)
B => (2000/2828, 2000/2828), i.e. approximately (0.71, 0.71)
What happens when there are more than 2 points?
The approach remains similar. We find the coordinates of the smallest bounding box which fits all the points.
We find the minimum value of X (MinX) and minimum value of Y (MinY) from all the points and do a shift operation. This changes the origin to the lower left corner of the bounding box.
We find the maximum value of X (MaxX) and maximum value of Y (MaxY) from all the points.
We calculate the length of the diagonal connecting (MinX,MinY) and (MaxX,MaxY) and use this value to do a scale operation.
length of diagonal=sqrt((maxX-minX)*(maxX-minX) + (maxY-minY)*(maxY-minY))
normalized X = (originalX - minX)/(length of diagonal)
normalized Y = (originalY - minY)/(length of diagonal)
How does this logic change if we have more than 2 dimensions?
The concept remains the same.
- We find the minimum value in each of the dimensions (X,Y,Z)
- We find the maximum value in each of the dimensions (X,Y,Z)
- Compute the length of the diagonal as a scaling factor
- Use the minimum values to shift the origin.
length of diagonal=sqrt((maxX-minX)*(maxX-minX)+(maxY-minY)*(maxY-minY)+(maxZ-minZ)*(maxZ-minZ))
normalized X = (originalX - minX)/(length of diagonal)
normalized Y = (originalY - minY)/(length of diagonal)
normalized Z = (originalZ - minZ)/(length of diagonal)
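Putting the steps together, a small Python sketch of this shift-and-scale-by-diagonal scheme might look like this (my own illustration; the function name and 2D-only scope are assumptions):

import math

def normalize_2d(points):
    """Map (x, y) points into [0, 1] while preserving the aspect ratio."""
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    max_x = max(p[0] for p in points)
    max_y = max(p[1] for p in points)
    diagonal = math.hypot(max_x - min_x, max_y - min_y)  # bounding-box diagonal
    return [((x - min_x) / diagonal, (y - min_y) / diagonal) for x, y in points]

print(normalize_2d([(5000, 8000), (7000, 10000)]))
# [(0.0, 0.0), (0.7071..., 0.7071...)]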
It seems you want each vector (1D, 2D or ND) to have length <= 1.
If that's the only requirement, you can just divide each vector by the length of the longest one.
double max = maximum (|vector| for each vector in 'data');
foreach (Vector v : data) {
    li.add(v / max);
}
That will make the longest vector in the result list have length 1.
But this won't be equivalent to your current code for the 1-dimensional case, as you can't find a minimum or maximum of a set of points on the plane. Thus, no delta.
Simple idea: Find out which dimension is bigger and normalize in this dimension. The second dimension can be computed by using the ratio. This way the ratio is kept and your values are between 0 and 1.
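If "bigger dimension" is read as the larger of the two coordinate ranges (my assumption), that idea could be sketched like this:

def normalize_keep_ratio(points):
    """Scale both axes by the larger range so the aspect ratio is preserved."""
    min_x = min(x for x, _ in points)
    min_y = min(y for _, y in points)
    scale = max(max(x for x, _ in points) - min_x,
                max(y for _, y in points) - min_y)
    return [((x - min_x) / scale, (y - min_y) / scale) for x, y in points]

print(normalize_keep_ratio([(5000, 8000), (7000, 10000)]))
# [(0.0, 0.0), (1.0, 1.0)] -- the bigger dimension spans the full [0, 1] range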
