Comparing Blob's centroid with the center of the bounding box - image

I'm trying to compare the blob's centroid with a small window centered in the middle of the blob's bounding box. The dimensions of this window are 20% of the dimensions of the bounding box.
I first implemented this algorithm to find the blob centroid:
For y = 0 To bmp.ScaleHeight - 1
    For x = 0 To bmp.ScaleWidth - 1
        If bmp.Point(x, y) = vbWhite Then
            Xs = Xs + x
            Ys = Ys + y
            area = area + 1
        End If
    Next x
Next y
YCentroid = Ys / area
XCentroid = Xs / area
Then I found the width and the height of the blob using
BlobHeight = MaxY - MinY
BlobWidth = MaxX - MinX
I now have the bounding box and the centroid. How can I check whether the centroid lies inside or outside the small centered box that is 20% of the bounding box?

You have the edges of the bounding box:
MinX MaxX
| |
########-MinY
# #
# #
# #
########-MaxY
Given BlobWidth, the centered box starts at MinX + 0.4*BlobWidth and extends for 0.2*BlobWidth (up to MinX + (0.4 + 0.2)*BlobWidth = MinX + 0.6*BlobWidth).
MinCenteredX = MinX + 0.4*BlobWidth
MaxCenteredX = MinX + 0.6*BlobWidth
Now you just have to check if XCentroid is between them, that is:
MinCenteredX <= XCentroid And XCentroid <= MaxCenteredX
Now do the same again for the Y coordinates and you're done.
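For illustration, here is a minimal sketch of the complete test in Python (the function and variable names are mine, and MinX/MaxX/MinY/MaxY are assumed to be already computed as in the question):

def centroid_in_center_box(cx, cy, min_x, max_x, min_y, max_y, fraction=0.2):
    # True if the centroid (cx, cy) lies inside a centered box whose sides
    # are `fraction` of the bounding-box dimensions.
    blob_w = max_x - min_x
    blob_h = max_y - min_y
    margin = (1.0 - fraction) / 2.0  # 0.4 for a 20% box
    min_cx = min_x + margin * blob_w
    max_cx = min_x + (margin + fraction) * blob_w
    min_cy = min_y + margin * blob_h
    max_cy = min_y + (margin + fraction) * blob_h
    return (min_cx <= cx <= max_cx) and (min_cy <= cy <= max_cy)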

Related

How to find the centroid of pixels with same color in a 360° video?

Let I be a w x h frame from a 360° video stream.
Let R be a red rectangle on that frame. R is smaller than the width of the image.
To compute the centroid of this rectangle we need to distinguish two cases:
case 1 where R is on the edges
case 2 where R is fully inside the frame
As you can see there will be a problem to compute the centroid with classical methods in case 1. Please note that I only care about horizontal overlapping.
For the moment I am doing it like this: first we detect the first point we find and use it as a reference, then we normalize dx, the difference between a point and the reference, and then we accumulate:
width = frame.width
rectangle_pixel = (255,0,0)
first_found_coord = (-1,-1)
centroid = (0,0)
centroid_count = 0

for pixel, coordinates in image:
    if(pixel != rectangle_pixel):
        continue
    if(first_found_coord == (-1,-1)):
        first_found_coord = coordinates
        centroid = coordinates
        continue
    dx = coordinates.x - first_found_coord.x
    if(dx > width/2):
        dx -= width
    else if(dx < - width/2):
        dx -= width
    centroid += (dx, coordinates.y)
    centroid_count++

final_centroid = centroid / centroid_count
But it doesn't work as expected. Where is the problem, and is there a faster solution?
Here is a solution based on transition points, i.e. where you move from red to non-red, or the other way around. To capture the horizontal center, I needed the following information:
gridSize.x : width of the space where rectangles can live.
w : width of your rectangle.
PseudoCode:
redPixel = (255,0,0);
transitionPoints = [];
betweenTransitionsColor = -1;

// take pixel i and pixel i+1 with their positions, incrementing i by one at each step
for (pixel1, P1), (pixel2, P2) in gridX : // horizontal points for a fixed `y`
    if pixel1 != pixel2: // one is red, the other is not
        nonRedPosition = (pixel1 != redPixel ? P1 : P2)
        transitionPoints.append(nonRedPosition)
        continue
    if(transitionPoints.length == 1 && betweenTransitionsColor == -1):
        betweenTransitionsColor = pixel2
    if transitionPoints.length == 2:
        break

// Case where your rectangle is on the edge (left or right)
if(transitionPoints.length == 1):
    if(abs(transitionPoints[0].x - w) < 2):
        xCenter = w/2
    else:
        xCenter = gridSize.x - w/2
else:
    [tP1, tP2] = transitionPoints
    // case 1 : the rectangle is split across the edge
    if betweenTransitionsColor != redPixel:
        xCenter = (tP2.x - gridSize.x + tP1.x)/2
    else:
        xCenter = (tP1.x + tP2.x)/2
Note:
You must start at a y position where you can find red pixels. This shouldn't be very hard to achieve. If your rectangle's height is bigger than gridSize.y/2, you can begin at gridSize.y/2. Otherwise, search for a first red pixel and set y to the corresponding position.
Since I'm computing the bounding boxes in the same scope, I do it in two steps.
I first accumulate the coordinates of the pixels of interest. Then, when checking for overlapping bounding boxes, I subtract the width for each overlapping color on the right half of the image, so I end up with a complete but shifted rectangle.
At the end I divide by the number of points found per color. If the result is negative, I shift it by the width of the image.
Alternatively:
def get_centroid(image, interest_color, width):
    # `width` is the horizontal size of the frame (the wrap-around period)
    acc_x = 0
    acc_y = 0
    count = 0
    first_pixel = (0, 0)
    for (x, y, color) in image:
        if color not in interest_color:
            continue
        if count == 0:
            first_pixel = (x, y)
        # horizontal offset from the reference pixel, wrapped into [-width/2, width/2]
        dx = x - first_pixel[0]
        if dx > width / 2:
            dx -= width
        elif dx < -width / 2:
            dx += width
        acc_x += dx
        acc_y += y
        count += 1
    # average offset, shifted back by the reference and wrapped into [0, width)
    result_x = (acc_x / count + first_pixel[0]) % width
    result_y = acc_y / count
    return (result_x, result_y)
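The same wrap-around idea is easy to vectorize. Here is a self-contained NumPy sketch (my own illustration, not from the original post), assuming a boolean mask that is True wherever the rectangle's color was detected:

import numpy as np

def wrap_centroid(mask):
    # Centroid of the True pixels in an H x W boolean mask, treating the
    # horizontal axis as periodic (360° frame): offsets are taken relative
    # to a reference column and wrapped into [-width/2, width/2).
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    width = mask.shape[1]
    ref = xs[0]
    dx = (xs - ref + width / 2) % width - width / 2
    cx = (ref + dx.mean()) % width
    cy = ys.mean()
    return (cx, cy)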

Check for pixel values in a neighborhood

I'm trying to write a MATLAB script that does the following:
Given: pixel coordinates (x, y) for a .jpg image
Goal: check, within a 5-pixel radius of the given coordinates, whether there is a pixel of a certain value.
For example, say I'm given the coordinates (100, 100); I then want to check the neighborhood of (100, 100) within my image for any pixels that are black (0,0,0). Perhaps pixels (103, 100) and (104, 100) have the value (0,0,0).
Current code:
x_coord = uint32(coord(:,1));
y_coord = uint32(coord(:,2));
count = 0;
for i = 1:length(x_coord)
    % img(x,y) returns the pixel value at (x,y)
    % Note: 0 = black, i.e. the image is black at that position
    if img(x_coord(i), y_coord(i)) == 0
        count = count + 1;
    end
end
It currently only checks the exact location, not a local neighborhood. How could I extend this?
EDIT: Also note that as long as there is at least one pixel in the neighborhood with the value, I increment count. I'm not trying to enumerate how many pixels in the neighborhood have that value, just trying to find evidence of at least one pixel that has that value.
EDIT:
Even though I am unable to identify an error with the code, I am not able to get the exact results I want. Here is the code I am using.
val = 0; % pixel value to check
N = 50;  % neighbourhood radius

% 2D grid of coordinates surrounding the center coordinate
[R, C] = ndgrid(1 : size(img, 1), 1 : size(img, 2));
for kk = 1 : size(coord, 1)
    r = coord(kk, 1); c = coord(kk, 2); % Get pixel locations
    % mask of valid locations within the neighbourhood (avoid boundary problems)
    mask = (R - r).^2 + (C - c).^2 <= N*N;
    pix = img(mask); % Get the valid pixels
    valid = any(pix(:) ~= val);
    % Add either 0 or 1 depending if we have found any matching pixels
    if(valid == 1)
        img = insertMarker(img, [r c], 'x', 'color', 'red', 'size', 10);
        imwrite(img, images(i).name, 'tiff');
    end
    count = count + valid;
end
An easier way to do this would be to use indexing to grab a neighbourhood, then check whether any of the pixels in that neighbourhood have the value you're looking for by calling any on a flattened version of the neighbourhood. The trick to grabbing the right neighbourhood is to first generate a 2D grid of coordinates that spans the entire dimensions of your image, then use the equation of a circle centred on each coordinate of interest and determine those locations that satisfy the following equation:
(x - a)^2 + (y - b)^2 <= N^2
N is the radius of the observation window, (a, b) is a coordinate of interest while (x, y) is a coordinate in the image. Use meshgrid to generate the coordinates.
You would use the above equation to create a logical mask, index into your image to pull the locations that are valid within the mask, and check how many pixels match the one you want. Another benefit of this approach is that you are not subject to any out-of-bounds errors. Because you pre-generate the list of all valid coordinates in your image, the mask confines you within the boundaries of the image, so you never have to check for out-of-bounds conditions, even when you specify search coordinates that lie outside the image.
Specifically, assuming your image is stored in img, you would do:
count = 0; % Remembers total count of pixels matching a value
val = 0;   % Value to match
N = 50;    % Radius of neighbourhood

% Generate 2D grid of coordinates
[x, y] = meshgrid(1 : size(img, 2), 1 : size(img, 1));

% For each coordinate to check...
for kk = 1 : size(coord, 1)
    a = coord(kk, 1); b = coord(kk, 2);    % Get the pixel locations
    mask = (x - a).^2 + (y - b).^2 <= N*N; % Mask of valid locations within the neighbourhood
    pix = img(mask);                       % Get the valid pixels
    count = count + any(pix(:) == val);    % Add 0 or 1 depending on whether any pixel matches
end
The proposed solution:
fc = repmat(-5:5,11,1);
I = (fc.^2+fc'.^2)<=25;
fc_x = fc(I);
fc_y = fc'; fc_y = fc_y(I);
for i = 1:length(x_coord)
    x_toCheck = fc_x + x_coord(i);
    y_toCheck = fc_y + y_coord(i);
    I = x_toCheck>0 & x_toCheck<=yourImageWidth;
    I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
    x_toCheck = x_toCheck(logical(I));
    y_toCheck = y_toCheck(logical(I));
    % Index the candidate pixels pairwise (sub2ind), not as a sub-matrix
    count = count + sum(img(sub2ind(size(img), x_toCheck, y_toCheck)) == 0);
end
If your img function can only check one pixel at a time, just add a for loop:
for i = 1:length(x_coord)
    x_toCheck = fc_x + x_coord(i);
    y_toCheck = fc_y + y_coord(i);
    I = x_toCheck>0 & x_toCheck<=yourImageWidth;
    I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
    x_toCheck = x_toCheck(logical(I));
    y_toCheck = y_toCheck(logical(I));
    for j = 1:length(x_toCheck)
        count = count + (img(x_toCheck(j),y_toCheck(j)) == 0);
    end
end
Step-by-step:
You first need to get all the coordinates within 5 pixels range of the given coordinate.
We start by building a square of 11 pixels in length/width.
fc = repmat(-5:5,11,1);
fc_x = fc;
fc_y = fc';
plot(fc_x,fc_y,'.');
We now need to build a filter to get rid of those points outside the 5-pixel radius.
I = (fc.^2+fc'.^2)<=25;
Apply the filter, so we can get a circle of 5-pixel radius.
fc_x = fc_x(I);
fc_y = fc_y(I);
Next translate the centre of the circle to the given coordinate:
x_toCheck = fc_x + x_coord(i);
y_toCheck = fc_y + y_coord(i);
You need to check whether part of the circle is outside the range of your image:
I = x_toCheck>0 & x_toCheck<=yourImageWidth;
I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
x_toCheck = x_toCheck(logical(I));
y_toCheck = y_toCheck(logical(I));
Finally count the pixels:
count = count + sum(img(sub2ind(size(img), x_toCheck, y_toCheck)) == 0);
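For reference, the same neighbourhood test is compact in Python with NumPy (a sketch under my own assumptions: img is a 2-D grayscale array and coords is a list of (row, col) points):

import numpy as np

def count_hits(img, coords, val=0, radius=5):
    # Count how many query points have at least one pixel equal to `val`
    # within `radius` pixels (Euclidean distance), never stepping outside the image.
    h, w = img.shape
    rr, cc = np.ogrid[:h, :w]
    count = 0
    for r, c in coords:
        mask = (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
        count += bool(np.any(img[mask] == val))
    return count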

Getting dimension of specific segment window

I'm comparing the center of the blob with a small box, 20% the size of the blob's bounding box, positioned at the center of that bounding box.
I implemented this code first, to find the blob center points:
For y = 0 To bmp.ScaleHeight - 1
    For x = 0 To bmp.ScaleWidth - 1
        If bmp.Point(x, y) = vbWhite Then
            Xs = Xs + x
            Ys = Ys + y
            area = area + 1
        End If
    Next x
Next y
YCentroid = Ys / area
XCentroid = Xs / area
Then the width and the height of the blob are calculated as below:
BlobHeight = MaxY - MinY
BlobWidth = MaxX - MinX
How do I get the dimensions of that small box so I can compare it with the center point?
Thanks
Coordinates of the small box centered about (XCentroid, YCentroid), with width equal to 20% of the blob width:
RectLeft = XCentroid - 0.1 * BlobWidth
RectRight = XCentroid + 0.1 * BlobWidth
RectTop = YCentroid - 0.1 * BlobHeight
RectBottom = YCentroid + 0.1 * BlobHeight

Cubic to equirectangular projection algorithm

I have a cube map texture which defines a surrounding; however, I need to pass it to a program which only works with latitude/longitude maps. I am really at a loss here on how to do the translation. Any help?
In other words, I need to come from here:
To this (I think that image has an additional -90° rotation over the x axis):
Update: I got the official names of the projections. By the way, I found the opposite projection here.
A general procedure for projecting raster images like this is:
for each pixel of the destination image:
calculate the corresponding unit vector in 3-dimensional space
calculate the x,y coordinate for that vector in the source image
sample the source image at that coordinate and assign the value to the destination pixel
The last step is simply interpolation. We will focus on the other two steps.
The unit vector for a given latitude and longitude is (+z towards the north pole, +x towards the prime meridian):
x = cos(lat)*cos(lon)
y = cos(lat)*sin(lon)
z = sin(lat)
Assume the cube is +/- 1 unit around the origin (i.e. 2x2x2 overall size).
Once we have the unit vector, we can find the face of the cube it's on by looking at the element with the largest absolute value. For example, if our unit vector was <0.2099, -0.7289, 0.6516>, then the y element has the largest absolute value. It's negative, so the point will be found on the -y face of the cube. Normalize the other two coordinates by dividing by the y magnitude to get the location within that face. So, the point will be at x=0.2879, z=0.8939 on the -y face.
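As a sketch of those two steps in Python (conventions as above: +z toward the north pole, +x toward the prime meridian; the face labels and helper name are mine):

import math

def latlon_to_cube(lat, lon):
    # Map latitude/longitude (radians) to (face, u, v) on a unit cube,
    # with u, v in [-1, 1] on that face.
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:        # +x or -x face
        face = '+x' if x > 0 else '-x'
        u, v = y / ax, z / ax
    elif ay >= az:                   # +y or -y face
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, z / ay
    else:                            # +z or -z face
        face = '+z' if z > 0 else '-z'
        u, v = x / az, y / az
    return face, u, v

Fed a latitude/longitude whose unit vector is <0.2099, -0.7289, 0.6516>, this returns the '-y' face with u ≈ 0.2879 and v ≈ 0.8939, matching the numbers above.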
I'd like to share my MATLAB implementation of this conversion. I also borrowed from the OpenGL 4.1 specification, Chapter 3.8.10 (found here), as well as Paul Bourke's website (found here). Make sure you look under the subheading: Converting to and from 6 cubic environment maps and a spherical map.
I also used Sambatyon's post above as inspiration. It started off as a port from Python over to MATLAB, but I made the code completely vectorized (i.e. no for loops). I also take the cubic image and split it up into 6 separate images, as the application I'm building has the cubic image in this format. Also, there is no error checking in the code, and it assumes that all of the cubic images are the same size (n x n). This also assumes that the images are in RGB format. If you'd like to do this for a monochromatic image, simply comment out those lines of code that require access to more than one channel. Here we go!
function [out] = cubic2equi(top, bottom, left, right, front, back)
% Height and width of equirectangular image
height = size(top, 1);
width = 2*height;
% Flags to denote what side of the cube we are facing
% Z-axis is coming out towards you
% X-axis is going out to the right
% Y-axis is going upwards
% Assuming that the front of the cube is towards the
% negative X-axis
FACE_Z_POS = 1; % Left
FACE_Z_NEG = 2; % Right
FACE_Y_POS = 3; % Top
FACE_Y_NEG = 4; % Bottom
FACE_X_NEG = 5; % Front
FACE_X_POS = 6; % Back
% Place in a cell array
stackedImages{FACE_Z_POS} = left;
stackedImages{FACE_Z_NEG} = right;
stackedImages{FACE_Y_POS} = top;
stackedImages{FACE_Y_NEG} = bottom;
stackedImages{FACE_X_NEG} = front;
stackedImages{FACE_X_POS} = back;
% Place in 3 3D matrices - Each matrix corresponds to a colour channel
imagesRed = uint8(zeros(height, height, 6));
imagesGreen = uint8(zeros(height, height, 6));
imagesBlue = uint8(zeros(height, height, 6));
% Place each channel into their corresponding matrices
for i = 1 : 6
    im = stackedImages{i};
    imagesRed(:,:,i) = im(:,:,1);
    imagesGreen(:,:,i) = im(:,:,2);
    imagesBlue(:,:,i) = im(:,:,3);
end
% For each co-ordinate in the normalized image...
[X, Y] = meshgrid(1:width, 1:height);
% Obtain the spherical co-ordinates
Y = 2*Y/height - 1;
X = 2*X/width - 1;
sphereTheta = X*pi;
spherePhi = (pi/2)*Y;
texX = cos(spherePhi).*cos(sphereTheta);
texY = sin(spherePhi);
texZ = cos(spherePhi).*sin(sphereTheta);
% Figure out which face we are facing for each co-ordinate
% First figure out the greatest absolute magnitude for each point
comp = cat(3, texX, texY, texZ);
[~,ind] = max(abs(comp), [], 3);
maxVal = zeros(size(ind));
% Copy those values - signs and all
maxVal(ind == 1) = texX(ind == 1);
maxVal(ind == 2) = texY(ind == 2);
maxVal(ind == 3) = texZ(ind == 3);
% Set each location in our equirectangular image, figure out which
% side we are facing
getFace = -1*ones(size(maxVal));
% Back
ind = abs(maxVal - texX) < 0.00001 & texX < 0;
getFace(ind) = FACE_X_POS;
% Front
ind = abs(maxVal - texX) < 0.00001 & texX >= 0;
getFace(ind) = FACE_X_NEG;
% Top
ind = abs(maxVal - texY) < 0.00001 & texY < 0;
getFace(ind) = FACE_Y_POS;
% Bottom
ind = abs(maxVal - texY) < 0.00001 & texY >= 0;
getFace(ind) = FACE_Y_NEG;
% Left
ind = abs(maxVal - texZ) < 0.00001 & texZ < 0;
getFace(ind) = FACE_Z_POS;
% Right
ind = abs(maxVal - texZ) < 0.00001 & texZ >= 0;
getFace(ind) = FACE_Z_NEG;
% Determine the co-ordinates along which image to sample
% based on which side we are facing
rawX = -1*ones(size(maxVal));
rawY = rawX;
rawZ = rawX;
% Back
ind = getFace == FACE_X_POS;
rawX(ind) = -texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);
% Front
ind = getFace == FACE_X_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);
% Top
ind = getFace == FACE_Y_POS;
rawX(ind) = texZ(ind);
rawY(ind) = texX(ind);
rawZ(ind) = texY(ind);
% Bottom
ind = getFace == FACE_Y_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = -texX(ind);
rawZ(ind) = texY(ind);
% Left
ind = getFace == FACE_Z_POS;
rawX(ind) = texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);
% Right
ind = getFace == FACE_Z_NEG;
rawX(ind) = -texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);
% Concatenate all for later
rawCoords = cat(3, rawX, rawY, rawZ);
% Finally determine co-ordinates (normalized)
cubeCoordsX = ((rawCoords(:,:,1) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoordsY = ((rawCoords(:,:,2) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoords = cat(3, cubeCoordsX, cubeCoordsY);
% Now obtain where we need to sample the image
normalizedX = round(cubeCoords(:,:,1) * height);
normalizedY = round(cubeCoords(:,:,2) * height);
% Just in case.... cap between [1, height] to ensure
% no out of bounds behaviour
normalizedX(normalizedX < 1) = 1;
normalizedX(normalizedX > height) = height;
normalizedY(normalizedY < 1) = 1;
normalizedY(normalizedY > height) = height;
% Place into a stacked matrix
normalizedCoords = cat(3, normalizedX, normalizedY);
% Output image allocation
out = uint8(zeros([size(maxVal) 3]));
% Obtain column-major indices on where to sample from the
% input images
% getFace will contain which image we need to sample from
% based on the co-ordinates within the equirectangular image
ind = sub2ind([height height 6], normalizedCoords(:,:,2), ...
normalizedCoords(:,:,1), getFace);
% Do this for each channel
out(:,:,1) = imagesRed(ind);
out(:,:,2) = imagesGreen(ind);
out(:,:,3) = imagesBlue(ind);
I've also made the code publicly available through github and you can go here for it. Included is the main conversion script, a test script to show its use and a sample set of 6 cubic images pulled from Paul Bourke's website. I hope this is useful!
Update: the project changed its name to libcube2cyl. Same goodness, better working examples in both C and C++. It is now also available in C.
I happened to solve the exact same problem as you described.
I wrote a tiny C++ lib called "Cube2Cyl"; you can find the detailed explanation of the algorithm here: Cube2Cyl
The source code is on github: Cube2Cyl
It is released under the MIT licence, use it for free!
So, I found a solution by mixing the Wikipedia article on spherical coordinates and Section 3.8.10 of the OpenGL 4.1 specification (plus some hacks to make it work). Assuming the cubic image has height h_o and width w_o, the equirectangular image will have height h = w_o / 3 and width w = 2 * h. Now, for each pixel (x, y), 0 <= x <= w, 0 <= y <= h, in the equirectangular projection, we want to find the corresponding pixel in the cubic projection. I solve it using the following Python code (I hope I didn't make mistakes while translating it from C):
import math

# from wikipedia
def spherical_coordinates(x, y):
    return (math.pi*((y/h) - 0.5), 2*math.pi*x/(2*h), 1.0)

# from wikipedia
def texture_coordinates(theta, phi, rho):
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

FACE_X_POS = 0
FACE_X_NEG = 1
FACE_Y_POS = 2
FACE_Y_NEG = 3
FACE_Z_POS = 4
FACE_Z_NEG = 5

# from opengl specification
def get_face(x, y, z):
    largest_magnitude = max(abs(x), abs(y), abs(z))
    if largest_magnitude - abs(x) < 0.00001:
        return FACE_X_POS if x < 0 else FACE_X_NEG
    elif largest_magnitude - abs(y) < 0.00001:
        return FACE_Y_POS if y < 0 else FACE_Y_NEG
    elif largest_magnitude - abs(z) < 0.00001:
        return FACE_Z_POS if z < 0 else FACE_Z_NEG

# from opengl specification
def raw_face_coordinates(face, x, y, z):
    if face == FACE_X_POS:
        return (-z, -y, x)
    elif face == FACE_X_NEG:
        return (-z, y, -x)
    elif face == FACE_Y_POS:
        return (-x, -z, -y)
    elif face == FACE_Y_NEG:
        return (-x, z, -y)
    elif face == FACE_Z_POS:
        return (-x, y, -z)
    elif face == FACE_Z_NEG:
        return (-x, -y, z)

# computes the topmost leftmost coordinate of the face in the cube map
def face_origin_coordinates(face):
    if face == FACE_X_POS:
        return (2*h, h)
    elif face == FACE_X_NEG:
        return (0, 2*h)
    elif face == FACE_Y_POS:
        return (h, h)
    elif face == FACE_Y_NEG:
        return (h, 3*h)
    elif face == FACE_Z_POS:
        return (h, 0)
    elif face == FACE_Z_NEG:
        return (h, 2*h)

# from opengl specification
def raw_coordinates(xc, yc, ma):
    return ((xc/abs(ma) + 1) / 2, (yc/abs(ma) + 1) / 2)

def normalized_coordinates(face, x, y):
    face_coords = face_origin_coordinates(face)
    normalized_x = int(math.floor(x * h + 0.5))
    normalized_y = int(math.floor(y * h + 0.5))
    # eliminates black pixels
    if normalized_x == h:
        normalized_x -= 1
    if normalized_y == h:
        normalized_y -= 1
    return (face_coords[0] + normalized_x, face_coords[1] + normalized_y)

def find_corresponding_pixel(x, y):
    spherical = spherical_coordinates(x, y)
    texture_coords = texture_coordinates(spherical[0], spherical[1], spherical[2])
    face = get_face(texture_coords[0], texture_coords[1], texture_coords[2])
    raw_face_coords = raw_face_coordinates(face, texture_coords[0], texture_coords[1], texture_coords[2])
    cube_coords = raw_coordinates(raw_face_coords[0], raw_face_coords[1], raw_face_coords[2])
    # this fixes some faces being rotated 90°
    if face in [FACE_X_NEG, FACE_X_POS]:
        cube_coords = (cube_coords[1], cube_coords[0])
    return normalized_coordinates(face, cube_coords[0], cube_coords[1])
At the end we just call find_corresponding_pixel for each pixel in the equirectangular projection.
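For completeness, a minimal driver loop might look like the following (a sketch only: the file names are hypothetical, Pillow is assumed, the cube map is assumed to follow the 3h-wide, 4h-tall layout implied by face_origin_coordinates above, and only nearest-neighbour sampling is done):

from PIL import Image

cube = Image.open("cubemap.png")   # hypothetical 3h x 4h cube-map image
h = cube.width // 3                # equirectangular height, as defined above
w = 2 * h

out = Image.new("RGB", (w, h))
for y in range(h):
    for x in range(w):
        src = find_corresponding_pixel(x, y)   # (x, y) inside the cube map
        out.putpixel((x, y), cube.getpixel(src))
out.save("equirectangular.png")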
I think from your algorithm in Python you might have inverted x and y in the calculation of theta and phi.
def spherical_coordinates(x, y):
    return (math.pi*((y/h) - 0.5), 2*math.pi*x/(2*h), 1.0)
From Paul Bourke's website here:
theta = x * pi
phi = y * pi / 2
and in your code you are using y in the theta calculation and x in the phi calculation.
Correct me if I am wrong.

Algorithm to subdivide a polygon in smaller polygons

I have a polygon made of successive edges on a plane, and I would like to subdivide it into sub-polygons that are triangles or rectangles.
Where can I find an algorithm to do this?
Thanks!
In computational geometry, the problem you want to solve is called triangulation.
There are algorithms to solve this problem, giving triangulations with different properties. You will need to decide which one is the best fit.
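For instance, Shapely can produce a Delaunay triangulation of a polygon's vertices (a quick sketch with a made-up polygon; for concave shapes you typically need to filter the triangles against the original polygon):

from shapely.geometry import Polygon
from shapely.ops import triangulate

poly = Polygon([(0, 0), (4, 0), (4, 3), (2, 4), (0, 3)])
# Delaunay triangulation of the polygon's vertices, keeping only
# triangles that lie inside the polygon
triangles = [t for t in triangulate(poly) if t.within(poly)]
for t in triangles:
    print(t)

Note that this triangulates the vertex set only; for polygons with holes or very concave outlines, a constrained triangulation library may be a better fit.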
I was looking for an answer to this myself but couldn't find one. I tried to stitch together several pieces and here's the result.
This is not necessarily the most optimal routine but it did the job for me. If you want to increase performance, try experimenting with the code.
A brief description of the algorithm:
Using the boundaries of the original geometry itself, its convex hull, and its minimum rotated rectangle, derive all possible rectangles.
Divide all rectangles into smaller squares of specified side length.
Drop duplicates using a rounded off centroid. (r: round off param)
Retain either those squares 'within' the geometry, or those that 'intersect' the geometry, depending on whichever is closer to the total number of required squares.
EDITED
New Solution
#### Python script for dividing any shapely polygon into smaller equal sized polygons
import numpy as np
from shapely.ops import split
import geopandas
from shapely.geometry import MultiPolygon, Polygon, LineString


def rhombus(square):
    """
    Naively transform the square into a Rhombus at a 45 degree angle
    """
    coords = square.boundary.coords.xy
    xx = list(coords[0])
    yy = list(coords[1])
    radians = 1
    points = list(zip(xx, yy))
    Rhombus = Polygon(
        [
            points[0],
            points[1],
            points[3],
            ((2 * points[3][0]) - points[2][0], (2 * points[3][1]) - points[2][1]),
            points[4],
        ]
    )
    return Rhombus


def get_squares_from_rect(RectangularPolygon, side_length=0.0025):
    """
    Divide a Rectangle (Shapely Polygon) into squares of equal area.

    `side_length` : required side of square
    """
    rect_coords = np.array(RectangularPolygon.boundary.coords.xy)
    y_list = rect_coords[1]
    x_list = rect_coords[0]
    y1 = min(y_list)
    y2 = max(y_list)
    x1 = min(x_list)
    x2 = max(x_list)
    width = x2 - x1
    height = y2 - y1

    xcells = int(np.round(width / side_length))
    ycells = int(np.round(height / side_length))

    yindices = np.linspace(y1, y2, ycells + 1)
    xindices = np.linspace(x1, x2, xcells + 1)
    horizontal_splitters = [
        LineString([(x, yindices[0]), (x, yindices[-1])]) for x in xindices
    ]
    vertical_splitters = [
        LineString([(xindices[0], y), (xindices[-1], y)]) for y in yindices
    ]
    result = RectangularPolygon
    for splitter in vertical_splitters:
        result = MultiPolygon(split(result, splitter))
    for splitter in horizontal_splitters:
        result = MultiPolygon(split(result, splitter))
    square_polygons = list(result)

    return square_polygons


def split_polygon(G, side_length=0.025, shape="square", thresh=0.9):
    """
    Using a rectangular envelope around `G`, creates a mesh of squares of required length.

    Removes non-intersecting polygons.

    Args:

    - `thresh` : Range - [0,1]
        This controls the number of smaller polygons at the boundaries.
        A thresh == 1 will only create (or retain) smaller polygons that are
        completely enclosed (area of intersection == area of smaller polygon)
        by the original Geometry - `G`.
        A thresh == 0 will create (or retain) smaller polygons that
        have a non-zero intersection (area of intersection > 0) with the
        original geometry - `G`

    - `side_length` : Range - (0,infinity)
        side_length must be such that the resultant geometries are smaller
        than the original geometry - `G`, for a useful result.
        side_length should be > 0 (non-zero positive)

    - `shape` : {square/rhombus}
        Desired shape of subset geometries.
    """
    assert side_length > 0, "side_length must be a float > 0"
    Rectangle = G.envelope
    squares = get_squares_from_rect(Rectangle, side_length=side_length)
    SquareGeoDF = geopandas.GeoDataFrame(squares).rename(columns={0: "geometry"})
    Geoms = SquareGeoDF[SquareGeoDF.intersects(G)].geometry.values
    if shape == "rhombus":
        Geoms = [rhombus(g) for g in Geoms]
        geoms = [g for g in Geoms if ((g.intersection(G)).area / g.area) >= thresh]
    elif shape == "square":
        geoms = [g for g in Geoms if ((g.intersection(G)).area / g.area) >= thresh]
    return geoms


# Reading geometric data
geo_filepath = "/data/geojson/pc_14.geojson"
GeoDF = geopandas.read_file(geo_filepath)

# Selecting random shapely-geometry
G = np.random.choice(GeoDF.geometry.values)

squares = split_polygon(G, shape='square', thresh=0.5, side_length=0.025)
rhombuses = split_polygon(G, shape='rhombus', thresh=0.5, side_length=0.025)
Previous Solution:
import numpy as np
import geopandas
from shapely.ops import split
from shapely.geometry import MultiPolygon, Polygon, Point, MultiPoint, LineString


def get_rect_from_geom(G, r=2):
    """
    Get rectangles from a geometry.
    r = rounding factor.
    small r ==> more rounding off ==> more rectangles
    """
    coordinate_arrays = G.exterior.coords.xy
    coordinates = list(
        zip(
            [np.round(c, r) for c in coordinate_arrays[0]],
            [np.round(c, r) for c in coordinate_arrays[1]],
        )
    )
    Rectangles = []
    for c1 in coordinates:
        Coords1 = [a for a in coordinates if a != c1]
        for c2 in Coords1:
            Coords2 = [b for b in Coords1 if b != c2]
            x1, y1 = c1[0], c1[1]
            x2, y2 = c2[0], c2[1]
            K1 = [k for k in Coords2 if k == (x1, y2)]
            K2 = [k for k in Coords2 if k == (x2, y1)]
            if (len(K1) > 0) & (len(K2) > 0):
                rect = [list(c1), list(K1[0]), list(c2), list(K2[0])]
                Rectangles.append(rect)
    return Rectangles


def get_squares_from_rect(rect, side_length=0.0025):
    """
    Divide a rectangle into equal area squares
    side_length = required side of square
    """
    y_list = [r[1] for r in rect]
    x_list = [r[0] for r in rect]
    y1 = min(y_list)
    y2 = max(y_list)
    x1 = min(x_list)
    x2 = max(x_list)
    width = x2 - x1
    height = y2 - y1
    xcells, ycells = int(np.round(width / side_length)), int(
        np.round(height / side_length)
    )
    yindices = np.linspace(y1, y2, ycells + 1)
    xindices = np.linspace(x1, x2, xcells + 1)
    horizontal_splitters = [
        LineString([(x, yindices[0]), (x, yindices[-1])]) for x in xindices
    ]
    vertical_splitters = [
        LineString([(xindices[0], y), (xindices[-1], y)]) for y in yindices
    ]
    result = Polygon(rect)
    for splitter in vertical_splitters:
        result = MultiPolygon(split(result, splitter))
    for splitter in horizontal_splitters:
        result = MultiPolygon(split(result, splitter))
    square_polygons = list(result)
    return [np.stack(SQPOLY.exterior.coords.xy, axis=1) for SQPOLY in square_polygons]


def round_centroid(g, r=10):
    """
    Get Centroids.
    Round off centroid coordinates to `r` decimal points.
    """
    C = g.centroid.coords.xy
    return (np.round(C[0][0], r), np.round(C[1][0], r))


def subdivide_polygon(g, side_length=0.0025, r=10):
    """
    1. Create all possible rectangle coordinates from the geometry,
       its minimum rotated rectangle, and its convex hull.
    2. Divide all rectangles into smaller squares.
    small r ==> more rounding off ==> fewer overlapping squares (these are dropped as duplicates)
    large r ==> may lead to a few overlapping squares.
    """
    # Number of squares required.
    num_squares_reqd = g.area // (side_length ** 2)

    # Some of these combinations can be dropped to improve performance.
    Rectangles = []
    Rectangles.extend(get_rect_from_geom(g))
    Rectangles.extend(get_rect_from_geom(g.minimum_rotated_rectangle))
    Rectangles.extend(get_rect_from_geom(g.convex_hull))

    Squares = []
    for rect in Rectangles:  # note: don't reuse `r` here, it is the rounding factor
        Squares.extend(get_squares_from_rect(rect, side_length=side_length))

    SquarePolygons = [Polygon(square) for square in Squares]
    GDF = geopandas.GeoDataFrame(SquarePolygons).rename(columns={0: "geometry"})
    GDF.loc[:, "centroid"] = GDF.geometry.apply(round_centroid, r=r)
    GDF = GDF.drop_duplicates(subset=["centroid"])

    wgeoms = GDF[GDF.within(g)].geometry.values
    igeoms = GDF[GDF.intersects(g)].geometry.values

    w = abs(num_squares_reqd - len(wgeoms))
    i = abs(num_squares_reqd - len(igeoms))
    print(w, i)

    if w <= i:
        return wgeoms
    else:
        return igeoms


geoms = subdivide_polygon(g)
Stumbled across this after many searches.
Thanks @Aditya Chhabra for your submission; it works great, but get_squares_from_rect is very slow for small side lengths due to iterative clips.
We can do this instantaneously if we combine all LineStrings into a single collection, then clip and polygonize in one step, which I found in this question.
Previously side lengths of 0.0001 (EPSG:4326) took > 1 minute, now it takes no time.
from shapely.ops import unary_union, polygonize, linemerge
from shapely.geometry import LineString
import numpy as np


def get_squares_from_rect_faster(RectangularPolygon, side_length=0.0025):
    rect_coords = np.array(RectangularPolygon.boundary.coords.xy)
    y_list = rect_coords[1]
    x_list = rect_coords[0]
    y1 = min(y_list)
    y2 = max(y_list)
    x1 = min(x_list)
    x2 = max(x_list)
    width = x2 - x1
    height = y2 - y1

    xcells = int(np.round(width / side_length))
    ycells = int(np.round(height / side_length))

    yindices = np.linspace(y1, y2, ycells + 1)
    xindices = np.linspace(x1, x2, xcells + 1)
    horizontal_splitters = [
        LineString([(x, yindices[0]), (x, yindices[-1])]) for x in xindices
    ]
    vertical_splitters = [
        LineString([(xindices[0], y), (xindices[-1], y)]) for y in yindices
    ]
    lines = horizontal_splitters + vertical_splitters
    lines.append(RectangularPolygon.boundary)
    lines = unary_union(lines)
    lines = linemerge(lines)

    return list(polygonize(lines))
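A quick usage check (my own example with a hypothetical rectangle; a 0.01 x 0.005 rectangle with side_length=0.0025 should yield 4 x 2 = 8 squares):

from shapely.geometry import Polygon

rect = Polygon([(0.0, 0.0), (0.01, 0.0), (0.01, 0.005), (0.0, 0.005)])
squares = get_squares_from_rect_faster(rect, side_length=0.0025)
print(len(squares))  # expected: 8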
