Think of a 2D grid, e.g. 1000x1000 cells in size, which is used as the map of a level in a game. This map is dynamically filled with game objects at runtime. Now we need to calculate the probability of placing a new object at a given x/y position in this grid.
What I have already is an int array that holds the number of game objects in close distance to the cell at x/y. The index of this array represents the cell distance to the given cell, and each value in the array tells the number of game objects in the grid at that distance. So for example the array could look like this:
0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1
This would mean that 0 objects are in the grid cell at x/y itself, 0 objects are in the directly neighbouring cells, 1 object is in a cell at a distance of two cells, 2 objects are in cells at a distance of three cells, and so on. The following figure illustrates this example:
The task now is to calculate how likely it is to place a new object at x/y, based on the values in this array. The algorithm should be something like this:
if at least one object is already closer than min, then the probability must be 0.0
else if no object is within a distance of max, then the probability must be 1.0
else the probability depends on how many objects are close to x/y, and how close they are.
So in other words: if there is at least one game object already very close, we don't want a new one. On the other hand if there is no object within a max radius, we want a new object in any case. Or else we want to place a new object with a probability depending on how many other objects there are close to x/y -- the more objects are close, and the closer they are, the less likely we want to place a new object.
I hope my description was understandable.
Can you think of an elegant algorithm or formula to calculate this probability?
PS: Sorry for the title of this question, I don't know how to summarize my question better.
One approach I'd consider is to compute a "population density" for that square. The lower the population density, the higher the probability that you would place an item there.
As you say, if there is an item at (x,y), then you can't place an item there. So consider that a population density of 1.0.
At the next level out there are 8 possible neighbors. The population density for that level would be n/8, where n is the number of items at that level. So if there are 3 objects that are adjacent to (x,y), then the density of that level is 3/8. Divide that by (distance+1).
Do the same for all levels. That is, compute the density of each level, divide by (distance+1), and sum the results. The divisor at each level is (distance*8). So your divisors are 8, 16, 24, etc.
Once you compute the results, you'll probably want to play with the numbers a bit to adjust the probabilities. That is, if you come up with a sum of 0.5, that space is likely pretty crowded. You wouldn't want to use (1-density) as your probability for generating an item. But the method I outline above should give you a single number to play with, which should simplify the problem.
So the algorithm looks something like:
total_density = 0;
for (i = 0; i < max; ++i)
{
    if (i == 0)
        local_density = counts[i];            // the cell at x/y itself
    else
        local_density = counts[i] / (i * 8);  // density at that level (a ring at distance i holds i*8 cells)
    total_density = total_density + (local_density / (i + 1));
}
If dividing the local density by (i+1) over-exaggerates the effect of distance, consider using something like log(i+1) or sqrt(i+1). I've found that to be useful in other situations where the distance is a factor, but not linearly.
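For reference, here is the same density sum as a small runnable Python function, with the distance falloff factored out so you can swap in a square root or a logarithm (the function and argument names are mine):

import math

def total_density(counts, falloff=lambda d: d + 1):
    # counts[d] = number of objects at cell distance d from the candidate cell
    total = 0.0
    for d, c in enumerate(counts):
        local = c if d == 0 else c / (d * 8.0)  # a ring at distance d holds d*8 cells
        total += local / falloff(d)             # falloff: d+1, or e.g. math.sqrt(d + 1)
    return total

# example with the array from the question
print(total_density([0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1]))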
Let's assume your array's name is distances.
double getProbability()
{
    // 1) if any object is already closer than min, the probability is 0
    for (int i = 0; i < min; i++)
    {
        if (distances[i] != 0) return 0;
    }

    // 2) if nothing lies within a distance of [min, max), the probability is 1
    bool empty = true;
    for (int i = min; i < max; i++)
    {
        empty = empty && (distances[i] == 0);
    }
    if (empty) return 1;

    // 3) otherwise: distance-weighted count of all objects, scaled by the total count
    double s = 0;
    for (int i = 0; i < distances.Count(); i++)
    {
        s += (double)distances[i] / (i + 1);
    }
    return s / totalObjectNum;
}
This approach calculates a weighted sum of those objects in a distance > min and <= max.
In parallel, an upper limit (called normWeight) is calculated, which depends only on max.
If at least one object is in a distance > min and <= max then
the probability closest to 1 would be 1-(1/normWeight) for 1 object on the outer ring.
The minimal probability would be 1-((normWeight-1)/normWeight), e.g. for max-1 objects on the outer ring.
The calculation of the weighted sum can be modified by calculating different values for the variable delta.
float calculateProbabilty()
{
vector<int> numObjects; // [i] := number of objects in a distance i
fill numObjects ....
// given:
int min = ...;
int max = ...; // must be >= min
bool anyObjectCloserThanMin = false;
bool anyObjectCloserThanMax = false;
// calculate a weighted sum
float sumOfWeights = 0.0;
float normWeight = 0.0;
for (int distance=0; distance <= max; distance++)
{
// calculate a delta-value for increasing sumOfWeights depending on distance
// the closer the object the higher the delta
// e.g.:
float delta = (float)(max + 1 - distance);
normWeight += delta;
if (numObjects[distance] > 0 && distance < min)
{
anyObjectCloserThanMin = true;
break;
}
if (numObjects[distance] > 0)
{
anyObjectCloserThanMax = true;
sumOfWeights += (float)numObjects[distance] * delta;
}
}
float probability = 0.0;
if (anyObjectCloserThanMin)
{
// if at least one object is already closer than min, then the probability must be 0.0
probability = 0.0;
}
else if (!anyObjectCloserThanMax)
{
// if no object is within a distance of max, then the probability must be 1.0
probability = 1.0;
}
else
{
// else the probability depends on how many objects are close to x/y
        // in this scenario normWeight defines an upper limit beyond which
        // the probability becomes 0
if (sumOfWeights >= normWeight)
{
probability = 0.0;
}
else
{
probability = 1. - (sumOfWeights / normWeight);
// The probability closest to 1 would be 1-(1/normWeight) for 1 object on the outer ring.
// The minimal probability would be 1-((normWeight-1)/normWeight). E.g. for
// max-1 objects on the outer ring.
}
}
return probability;
}
A simple approach could be:
1 / (sum over the number of all neighbours in [min, max] weighted by their distance to x/y + 1).
By weighted I mean that the number of those neighbours whose distance to x/y is smaller is multiplied by a bigger factor than the number of those that are not so close. As a weight you could for example take (max+1)-distance.
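Read literally, that could look like the following small Python sketch (the names are mine, not part of the original suggestion):

def simple_probability(counts, min_d, max_d):
    # each neighbour at distance d in [min, max] contributes with weight (max+1) - d,
    # so closer neighbours count more
    weighted = sum(counts[d] * ((max_d + 1) - d) for d in range(min_d, max_d + 1))
    return 1.0 / (weighted + 1)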
Note that once you compute the object density (see "population density" or "weighted sum of those objects in a distance" in previous answers), you still need to transform this value into a probability with which to insert new objects (which is not treated so comprehensively in the other answers).
The probability function (PDF) needs to be defined for all possible values of object density, i.e. on closed interval [0, 1], but otherwise it can be shaped towards any goal you desire (see illustrations), e.g.:
to move the current object density towards a desired maximum object density
to keep overall probability of insertion constant, while taking the local object density into account
If you want to experiment with various goals (PDF function shapes - linear, quadratic, hyperbola, circle section, ...), you might wish to have a look at the factory method pattern so you can switch between implementations while calling the same method name, but I wanted to keep things simpler in my example, so I implemented only the 1st goal (in Python):
def object_density(objects, min, max):
    # choose your favourite algorithm, e.g.:
    # compute the density for each level of distance and then average
    # the levels; a ring at distance d holds d*8 cells (assumes min >= 1).
    # Returns float between 0 and 1 (inclusive) for valid inputs.
# Returns float between 0 and 1 (inclusive) for valid inputs.
levels = [objects[d] / (d * 8) for d in range(min, max + 1)]
return sum(levels) / len(levels)
def probability_from_density(desired_max_density, density):
# play with PDF functions, e.g.
# a simple linear function
# f(x) = a*x + b
# where we know 2 points [0, 1] and [desired_max_density, 0], so:
# 1 = 0 + b
# 0 = a*desired_max_density + b
# Returns float betwen 0 and 1 (inclusive) for valid inputs.
if density >= desired_max_density:
return 0.0
a = -1 / desired_max_density
b = 1
return a * density + b
def main():
# distance 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
objects = [0, 0, 1, 2, 0, 3, 1, 0, 4, 0, 1]
min = 2
max = 5
desired_max_density = 0.1
if sum(objects[:min]): # when an object is below min distance
return 0.0
    density = object_density(objects, min, max)  # 0.0552
    probability = probability_from_density(desired_max_density, density)  # 0.4479
return probability
print(main())
I have a set of red lines from which I get a set of green intersection points (visible on the screen):
Then I want to find the four points that most likely describe the rectangle (if there are several options, then choose the largest area). I read similar questions about how to find points that EXACTLY form a rectangle:
find if 4 points on a plane form a rectangle?
https://softwareengineering.stackexchange.com/questions/176938/how-to-check-if-4-points-form-a-square
There is an option to iterate over all four points and calculate the probability that they form a rectangle (or some coefficient of similarity to a rectangle). Suppose at the moment we are considering four points A, B, C, D. I tried 2 similarity functions (both given as formula images in the original post):
The first is expressed in terms of dot products and vector norms of the sides, where <> denotes dot product, and || - vector norm.
The second is expressed in terms of std and mean, where std is the standard deviation of the distances from the vertices to the center of mass of the assumed rectangle, and mean is the average distance.
Both functions did not perform well.
Is there a way to introduce a function that is close to 1 when the four points of the plane are close to the vertices of the rectangle and equal to 0 when they are at the position farthest from the rectangle (assuming they are on 1 line)?
I can't really speak to finding an appropriate cost function for scoring what a "good" rectangle is. From the comments it looks like there's a lot of discussion, but no consensus. So for now I'm going to just use a scoring function that penalizes four-point shapes for having angles that are further away from 90 degrees. Specifically, I'm summing the squared distance. If you want to have a different scoring metric you can replace the calculation in the scoreFunc function.
I set up an interactive window where you can click to add points. When you press 'q' it'll take those points, find all possible combinations (not permutations) of 4 points, run the scoring function on each, and draw the best.
I'm using a recursive, brute-force search. To avoid having a ton of duplicates I came up with a hashing function that works regardless of order. I used prime numbers to ID each point and the hashing function just takes the product of the ID's of the points. This ensures that (1,3,5,7) is the same as (3,1,7,5). I used primes because the product of primes is unique in this situation (they can't be factorized and clumped because they're primes).
After the search I have to make sure that the points are ordered in such a way that the lines aren't intersecting. I'm taking advantage of OpenCV's contourArea to do that calculation for me. I can swap the first point with its horizontal and vertical neighbor and compare the areas to the original. "Bowtie" shapes from intersecting lines will have less area (I'm pretty sure they actually get zero area because they don't count as closed shapes) than a non-intersecting shape.
import cv2
import numpy as np
import math
# get mouse click
click_pos = None;
click = False;
def mouseClick(event, x, y, flags, param):
# hook to globals
global click_pos;
global click;
# check for left mouseclick
if event == cv2.EVENT_LBUTTONDOWN:
click = True;
click_pos = (x,y);
# prime hash function
def phash(points):
total = 1;
for point in points:
total *= point[0];
return total;
# checks if an id is already present in list
def isInList(point, curr_list):
pid = point[0];
for item in curr_list:
if item[0] == pid:
return True;
return False;
# look for rectangles
def getAllRects(points, curr_list, rects, curr_point):
# check if already in curr_list
if isInList(curr_point, curr_list):
return curr_list;
# add self to list
curr_list.append(curr_point);
# check end condition
if len(curr_list) == 4:
# add to dictionary (no worry for duplicates)
rects[phash(curr_list)] = curr_list[:];
curr_list = curr_list[:-1];
return curr_list;
# continue search
for point in points:
curr_list = getAllRects(points, curr_list, rects, point);
curr_list = curr_list[:-1];
return curr_list;
# checks if a number is prime
def isPrime(num):
bound = int(math.sqrt(num));
curr = 3;
while curr <= bound:
if num % curr == 0:
return False;
# skip evens
curr += 2;
return True;
# generate prime number id's for each point
def genPrimes(num):
primes = [];
curr = 1;
while len(primes) < num:
if isPrime(curr):
primes.append(curr);
# +2 to skip evens
curr += 2;
return primes;
# swap sides (fix intersecting lines issue)
def swapH(box):
new_box = np.copy(box);
new_box[0] = box[1];
new_box[1] = box[0];
return new_box;
def swapV(box):
new_box = np.copy(box);
new_box[0] = box[3];
new_box[3] = box[0];
return new_box;
# removes intersections
def noNoodles(box):
# get three variants
hbox = swapH(box);
vbox = swapV(box);
# get areas and choose max
sortable = [];
sortable.append([cv2.contourArea(box), box]);
sortable.append([cv2.contourArea(hbox), hbox]);
sortable.append([cv2.contourArea(vbox), vbox]);
sortable.sort(key = lambda a : a[0]);
return sortable[-1][1];
# 2d distance
def dist2D(one, two):
dx = one[0] - two[0];
dy = one[1] - two[1];
return math.sqrt(dx*dx + dy*dy);
# angle between three points (the last point is the middle)
# law of cosines
def angle3P(p1, p2, p3):
# get distances
a = dist2D(p3, p1);
b = dist2D(p3, p2);
c = dist2D(p1, p2);
# calculate angle // assume a and b are nonzero
numer = c**2 - a**2 - b**2;
denom = -2 * a * b;
if denom == 0:
denom = 0.000001;
rads = math.acos(numer / denom);
degs = math.degrees(rads);
return degs;
# calculates a score
def scoreFunc(box):
# for each point, calculate angle
angles = [];
for a in range(len(box)):
prev = box[a-2][0];
curr = box[a-1][0];
next = box[a][0];
angles.append(angle3P(prev, next, curr));
# for each angle, score on squared distance from 90
score = 0;
for angle in angles:
score += (angle - 90)**2;
return score;
# evaluates each box (assigns a score)
def evaluate(boxes):
sortable = [];
for box in boxes:
# INSERT YOUR OWN SCORING FUNC HERE
sortable.append([scoreFunc(box), box]);
sortable.sort(key = lambda a : a[0]);
return sortable;
# set up callback
cv2.namedWindow("Display");
cv2.setMouseCallback("Display", mouseClick);
# set up screen
res = (600,600,3);
bg = np.zeros(res, np.uint8);
# loop
done = False;
points = [];
while not done:
# reset display
display = np.copy(bg);
# check for new click
if click:
click = False;
points.append(click_pos);
# draw points
for point in points:
cv2.circle(display, point, 4, (0,200,0), -1);
# show
cv2.imshow("Display", display);
key = cv2.waitKey(1);
# check keypresses
done = key == ord('q');
# generate prime number id's for each point
# if you have a lot of points, it would be worth it
# to just have a .txt file with a bunch of pre-gen primes in it
primes = genPrimes(len(points));
print(primes);
withPrimes = [];
for a in range(len(points)):
withPrimes.append([primes[a], points[a]]);
# run brute-force search over all points
rects = {};
for a in range(len(withPrimes)):
getAllRects(withPrimes, [], rects, withPrimes[a]);
print(len(rects));
# extract just the points (don't need the prime id's anymore)
boxes = [];
for key in rects:
box = [];
for item in rects[key]:
box.append([item[1]]);
boxes.append(np.array(box));
# go through all of the boxes and un-intersect their sides
for a in range(len(boxes)):
boxes[a] = noNoodles(boxes[a]);
# draw each one to check for noodles
# for box in boxes:
# blank = np.zeros_like(bg, np.uint8);
# cv2.drawContours(blank, [box], -1, (255,255,255), -1);
# cv2.imshow("Box", blank);
# cv2.waitKey(0);
# noodles have been squared get best box
sortedBoxes = evaluate(boxes);
bestBox = sortedBoxes[0][1];
# draw
blank = np.zeros_like(bg, np.uint8);
cv2.drawContours(blank, [bestBox], -1, (255,255,255), -1);
for point in points:
cv2.circle(blank, point, 4, (0,200,0), -1);
cv2.imshow("Best", blank);
cv2.waitKey(0);
I have a list of points in 2D space that form an (imperfect) grid:
x x x x
x x x x
x
x x x
x x x x
What's the best way to fit these to a rigid grid (i.e. create a two-dimensional array and work out where each point fits in that array)?
There are no holes in the grid, but I don't know in advance what its dimensions are.
EDIT: The grid is not necessarily regular (not even spacing between rows/cols)
A little bit of an image processing approach:
If you think of what you have as a binary image where the X is 1 and the rest is 0, you can sum up rows and columns, and use a peak finding algorithm to identify peaks which would correspond to x and y lines of the grid:
Your points as a binary image:
Sums of row/columns
Now apply some smoothing technique to the signal (e.g. lowess):
I'm sure you get the idea :-)
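As a rough illustration of the idea (the toy coordinates and the smoothing width here are made up, and scipy's find_peaks merely stands in for whatever peak finder you prefer):

import numpy as np
from scipy.signal import find_peaks

# toy point set standing in for the real data
points = [(10, 10), (40, 12), (70, 9), (11, 41), (39, 40), (71, 42)]
img = np.zeros((80, 80))
for x, y in points:
    img[y, x] = 1                            # binary image: 1 where a point lies

# sum rows and columns, smooth a little, then find the peaks = grid lines
kernel = np.ones(7) / 7.0
col_profile = np.convolve(img.sum(axis=0), kernel, mode='same')
row_profile = np.convolve(img.sum(axis=1), kernel, mode='same')
col_peaks, _ = find_peaks(col_profile)       # approximate x-positions of the grid columns
row_peaks, _ = find_peaks(row_profile)       # approximate y-positions of the grid rows
print(col_peaks, row_peaks)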
Good luck
The best I could come up with is a brute-force solution that calculates the grid dimensions that minimize the error in the square of the Euclidean distance between the point and its nearest grid intersection.
This assumes that the number of points p is exactly equal to the number of columns times the number of rows, and that each grid intersection has exactly one point on it. It also assumes that the minimum x/y value for any point is zero. If the minimum is greater than zero, just subtract the minimum x value from each point's x coordinate and the minimum y value from each point's y coordinate.
The idea is to create all of the possible grid dimensions given the number of points. In the example above with 16 points, we would make grids with dimensions 1x16, 2x8, 4x4, 8x2 and 16x1. For each of these grids we calculate where the grid intersections would lie by dividing the maximum width of the points by the number of columns minus 1, and the maximum height of the points by the number of rows minus 1. Then we fit each point to its closest grid intersection and find the error (square of the distance) between the point and the intersection. (Note that this only works if each point is closer to its intended grid intersection than to any other intersection.)
After summing the errors for each grid configuration individually (e.g. getting one error value for the 1x16 configuration, another for the 2x8 configuration and so on), we select the configuration with the lowest error.
Initialization:
P is the set of points such that P[i][0] is the x-coordinate and
P[i][1] is the y-coordinate
Let p = |P| or the number of points in P
Let max_x = the maximum x-coordinate in P
Let max_y = the maximum y-coordinate in P
(minimum values are assumed to be zero)
Initialize min_error_dist = +infinity
Initialize min_error_cols = -1
Algorithm:
for (col_count = 1; col_count <= p; col_count++) {
    // only compute for integer # of rows and cols
    if ((p % col_count) == 0) {
        row_count = p/col_count;
// Compute the width of the columns and height of the rows
// If the number of columns is 1, let the column width be max_x
// (and similarly for rows)
if (col_count > 1) col_width = max_x/(col_count-1);
else col_width=max_x;
if (row_count > 1) row_height = max_y/(row_count-1);
else row_height=max_y;
// reset the error for the new configuration
error_dist = 0.0;
        for (i = 0; i < p; i++) {
// For the current point, normalize the x- and y-coordinates
// so that it's in the range 0..(col_count-1)
// and 0..(row_count-1)
normalized_x = P[i][0]/col_width;
normalized_y = P[i][1]/row_height;
// Error is the sum of the squares of the distances between
// the current point and the nearest grid point
// (in both the x and y direction)
error_dist += (normalized_x - round(normalized_x))^2 +
(normalized_y - round(normalized_y))^2;
}
if (error_dist < min_error_dist) {
min_error_dist = error_dist;
min_error_cols = col_count;
}
}
}
return min_error_cols;
Once you've got the number of columns (and thus the number of rows) you can recompute the normalized values for each point and round them to get the grid intersection they belong to.
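A minimal sketch of that final rounding step in Python (names follow the pseudocode above):

def snap_points_to_grid(P, col_count, row_count, max_x, max_y):
    # spacing as in the pseudocode; single-row/column cases fall back to the full extent
    col_width = max_x / (col_count - 1) if col_count > 1 else max_x
    row_height = max_y / (row_count - 1) if row_count > 1 else max_y
    # round each normalized coordinate to get the grid intersection a point belongs to
    return [(round(x / col_width), round(y / row_height)) for (x, y) in P]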
In the end I used this algorithm, inspired by beaker's:
Calculate all the possible dimensions of the grid, given the total number of points
For each possible dimension, fit the points to that dimension and calculate the variance in alignment:
Order the points by x-value
Group the points into columns: the first r points form the first column, where r is the number of rows
Within each column, order the points by y-value to determine which row they're in
For each row/column, calculate the range of y-values/x-values
The variance in alignment is the maximum range found
Choose the dimension with the least variance in alignment
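A compact Python sketch of those steps (helper names are mine; points are (x, y) tuples):

def alignment_variance(points, rows, cols):
    pts = sorted(points)                                 # order by x-value
    columns = [sorted(pts[i * rows:(i + 1) * rows], key=lambda p: p[1])
               for i in range(cols)]                     # groups of `rows` points, each ordered by y
    col_ranges = [max(p[0] for p in c) - min(p[0] for p in c) for c in columns]
    grid_rows = [[c[r] for c in columns] for r in range(rows)]
    row_ranges = [max(p[1] for p in r) - min(p[1] for p in r) for r in grid_rows]
    return max(col_ranges + row_ranges)                  # the maximum range found

def best_dimensions(points):
    n = len(points)
    dims = [(n // c, c) for c in range(1, n + 1) if n % c == 0]
    return min(dims, key=lambda rc: alignment_variance(points, rc[0], rc[1]))

# best_dimensions([(0, 0), (1, 0), (0, 1), (1, 1)])  ->  (2, 2)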
I wrote this algorithm that accounts for missing coordinates as well as coordinates with errors.
Python Code
# Input [x, y] coordinates of a 'sparse' grid with errors
xys = [[103,101],
[198,103],
[300, 99],
[ 97,205],
[304,202],
[102,295],
[200,303],
[104,405],
[205,394],
[298,401]]
def row_col_avgs(num_list, ratio):
# Finds the average of each row and column. Coordinates are
# assigned to a row and column by specifying an error ratio.
last_num = 0
sum_nums = 0
count_nums = 0
avgs = []
num_list.sort()
for num in num_list:
if num > (1 + ratio) * last_num and count_nums != 0:
avgs.append(int(round(sum_nums/count_nums,0)))
sum_nums = num
count_nums = 1
else:
sum_nums = sum_nums + num
count_nums = count_nums + 1
last_num = num
avgs.append(int(round(sum_nums/count_nums,0)))
return avgs
# Split coordinates into two lists of x's and y's
xs, ys = map(list, zip(*xys))
# Find averages of each row and column within a specified error.
x_avgs = row_col_avgs(xs, 0.1)
y_avgs = row_col_avgs(ys, 0.1)
# Return Completed Averaged Grid
avg_grid = []
for y_avg in y_avgs:
avg_row = []
for x_avg in x_avgs:
avg_row.append([int(x_avg), int(y_avg)])
avg_grid.append(avg_row)
print(avg_grid)
Code Output
[[[102, 101], [201, 101], [301, 101]],
[[102, 204], [201, 204], [301, 204]],
[[102, 299], [201, 299], [301, 299]],
[[102, 400], [201, 400], [301, 400]]]
I am also looking for another solution using linear algebra. See my question here.
I'm not sure how to approach this problem. I'm not sure how complex a task it is. My aim is to have an algorithm that generates any polygon. My only requirement is that the polygon is not complex (i.e. sides do not intersect). I'm using Matlab for doing the maths but anything abstract is welcome.
Any aid/direction?
EDIT:
I was thinking more of code that could generate any polygon even things like this:
I took @MitchWheat and @templatetypedef's idea of sampling points on a circle and took it a bit farther.
In my application I need to be able to control how weird the polygons are, i.e. start with regular polygons and as I crank up the parameters they get increasingly chaotic. The basic idea is as stated by @templatetypedef; walk around the circle taking a random angular step each time, and at each step put a point at a random radius. In equations I'm generating the angular steps and radii as

theta_(i+1) = theta_i + U(2*pi/n - epsilon, 2*pi/n + epsilon)
r_i = clip(N(r_ave, sigma), 0, 2*r_ave)

where theta_i and r_i give the angle and radius of each point relative to the centre, n is the number of vertices, U(min, max) pulls a random number from a uniform distribution, N(mu, sigma) pulls a random number from a Gaussian distribution, and clip(x, min, max) thresholds a value into a range. This gives us two really nice parameters to control how wild the polygons are - epsilon, which I'll call irregularity, controls whether or not the points are uniformly spaced angularly around the circle, and sigma, which I'll call spikiness, controls how much the points can vary from the circle of radius r_ave. If you set both of these to 0 then you get perfectly regular polygons; if you crank them up then the polygons get crazier.
I whipped this up quickly in python and got stuff like this:
Here's the full python code:
import math, random
from typing import List, Tuple
def generate_polygon(center: Tuple[float, float], avg_radius: float,
irregularity: float, spikiness: float,
num_vertices: int) -> List[Tuple[float, float]]:
"""
Start with the center of the polygon at center, then creates the
polygon by sampling points on a circle around the center.
Random noise is added by varying the angular spacing between
sequential points, and by varying the radial distance of each
point from the centre.
Args:
center (Tuple[float, float]):
a pair representing the center of the circumference used
to generate the polygon.
avg_radius (float):
the average radius (distance of each generated vertex to
the center of the circumference) used to generate points
with a normal distribution.
irregularity (float):
variance of the spacing of the angles between consecutive
vertices.
spikiness (float):
variance of the distance of each vertex to the center of
the circumference.
num_vertices (int):
the number of vertices of the polygon.
Returns:
List[Tuple[float, float]]: list of vertices, in CCW order.
"""
# Parameter check
if irregularity < 0 or irregularity > 1:
raise ValueError("Irregularity must be between 0 and 1.")
if spikiness < 0 or spikiness > 1:
raise ValueError("Spikiness must be between 0 and 1.")
irregularity *= 2 * math.pi / num_vertices
spikiness *= avg_radius
angle_steps = random_angle_steps(num_vertices, irregularity)
# now generate the points
points = []
angle = random.uniform(0, 2 * math.pi)
for i in range(num_vertices):
radius = clip(random.gauss(avg_radius, spikiness), 0, 2 * avg_radius)
point = (center[0] + radius * math.cos(angle),
center[1] + radius * math.sin(angle))
points.append(point)
angle += angle_steps[i]
return points
def random_angle_steps(steps: int, irregularity: float) -> List[float]:
"""Generates the division of a circumference in random angles.
Args:
steps (int):
the number of angles to generate.
irregularity (float):
variance of the spacing of the angles between consecutive vertices.
Returns:
List[float]: the list of the random angles.
"""
# generate n angle steps
angles = []
lower = (2 * math.pi / steps) - irregularity
upper = (2 * math.pi / steps) + irregularity
cumsum = 0
for i in range(steps):
angle = random.uniform(lower, upper)
angles.append(angle)
cumsum += angle
# normalize the steps so that point 0 and point n+1 are the same
cumsum /= (2 * math.pi)
for i in range(steps):
angles[i] /= cumsum
return angles
def clip(value, lower, upper):
"""
Given an interval, values outside the interval are clipped to the interval
edges.
"""
return min(upper, max(value, lower))
# @MateuszKonieczny here is code to create an image of a polygon from a list of vertices.
from PIL import Image, ImageDraw

vertices = generate_polygon(center=(250, 250),
avg_radius=100,
irregularity=0.35,
spikiness=0.2,
num_vertices=16)
black = (0, 0, 0)
white = (255, 255, 255)
img = Image.new('RGB', (500, 500), white)
im_px_access = img.load()
draw = ImageDraw.Draw(img)
# either use .polygon(), if you want to fill the area with a solid colour
draw.polygon(vertices, outline=black, fill=white)
# or .line() if you want to control the line thickness, or use both methods together!
draw.line(vertices + [vertices[0]], width=2, fill=black)
img.show()
# now you can save the image (img), or do whatever else you want with it.
There's a neat way to do what you want by taking advantage of the MATLAB classes DelaunayTri and TriRep and the various methods they employ for handling triangular meshes. The code below follows these steps to create an arbitrary simple polygon:
Generate a number of random points equal to the desired number of sides plus a fudge factor. The fudge factor ensures that, regardless of the result of the triangulation, we should have enough facets to be able to trim the triangular mesh down to a polygon with the desired number of sides.
Create a Delaunay triangulation of the points, resulting in a convex polygon that is constructed from a series of triangular facets.
If the boundary of the triangulation has more edges than desired, pick a random triangular facet on the edge that has a unique vertex (i.e. the triangle only shares one edge with the rest of the triangulation). Removing this triangular facet will reduce the number of boundary edges.
If the boundary of the triangulation has fewer edges than desired, or the previous step was unable to find a triangle to remove, pick a random triangular facet on the edge that has only one of its edges on the triangulation boundary. Removing this triangular facet will increase the number of boundary edges.
If no triangular facets can be found matching the above criteria, post a warning that a polygon with the desired number of sides couldn't be found and return the x and y coordinates of the current triangulation boundary. Otherwise, keep removing triangular facets until the desired number of edges is met, then return the x and y coordinates of triangulation boundary.
Here's the resulting function:
function [x, y, dt] = simple_polygon(numSides)
if numSides < 3
x = [];
y = [];
dt = DelaunayTri();
return
end
oldState = warning('off', 'MATLAB:TriRep:PtsNotInTriWarnId');
fudge = ceil(numSides/10);
x = rand(numSides+fudge, 1);
y = rand(numSides+fudge, 1);
dt = DelaunayTri(x, y);
boundaryEdges = freeBoundary(dt);
numEdges = size(boundaryEdges, 1);
while numEdges ~= numSides
if numEdges > numSides
triIndex = vertexAttachments(dt, boundaryEdges(:,1));
triIndex = triIndex(randperm(numel(triIndex)));
keep = (cellfun('size', triIndex, 2) ~= 1);
end
if (numEdges < numSides) || all(keep)
triIndex = edgeAttachments(dt, boundaryEdges);
triIndex = triIndex(randperm(numel(triIndex)));
triPoints = dt([triIndex{:}], :);
keep = all(ismember(triPoints, boundaryEdges(:,1)), 2);
end
if all(keep)
warning('Couldn''t achieve desired number of sides!');
break
end
triPoints = dt.Triangulation;
triPoints(triIndex{find(~keep, 1)}, :) = [];
dt = TriRep(triPoints, x, y);
boundaryEdges = freeBoundary(dt);
numEdges = size(boundaryEdges, 1);
end
boundaryEdges = [boundaryEdges(:,1); boundaryEdges(1,1)];
x = dt.X(boundaryEdges, 1);
y = dt.X(boundaryEdges, 2);
warning(oldState);
end
And here are some sample results:
The generated polygons could be either convex or concave, but for larger numbers of desired sides they will almost certainly be concave. The polygons are also generated from points randomly generated within a unit square, so polygons with larger numbers of sides will generally look like they have a "squarish" boundary (such as the lower right example above with the 50-sided polygon). To modify this general bounding shape, you can change the way the initial x and y points are randomly chosen (i.e. from a Gaussian distribution, etc.).
For a convex 2D polygon (totally off the top of my head):
Generate a random radius, R
Generate N random points on the circumference of a circle of Radius R
Move around the circle and draw straight lines between adjacent points on the circle.
As @templatetypedef and @MitchWheat said, it is easy to do so by generating N random angles and radii. It is important to sort the angles, otherwise it will not be a simple polygon. Note that I am using a neat trick to draw closed curves - I described it here. By the way, the polygons might be concave.
Note that all of these polygons will be star shaped. Generating a more general polygon is not a simple problem at all.
Just to give you a taste of the problem - check out
http://www.cosy.sbg.ac.at/~held/projects/rpg/rpg.html
and http://compgeom.cs.uiuc.edu/~jeffe/open/randompoly.html.
function CreateRandomPoly()
figure();
colors = {'r','g','b','k'};
for i=1:5
[x,y]=CreatePoly();
c = colors{ mod(i-1,numel(colors))+1};
plotc(x,y,c);
hold on;
end
end
function [x,y]=CreatePoly()
numOfPoints = randi(30);
theta = randi(360,[1 numOfPoints]);
theta = theta * pi / 180;
theta = sort(theta);
rho = randi(200,size(theta));
[x,y] = pol2cart(theta,rho);
xCenter = randi([-1000 1000]);
yCenter = randi([-1000 1000]);
x = x + xCenter;
y = y + yCenter;
end
function plotc(x,y,varargin)
x = [x(:) ; x(1)];
y = [y(:) ; y(1)];
plot(x,y,varargin{:})
end
Here is a working Matlab port of Mike Ounsworth's solution. I did not optimize it for Matlab; I might update the solution later for that.
function [points] = generatePolygon(ctrX, ctrY, aveRadius, irregularity, spikeyness, numVerts)
%{
Start with the centre of the polygon at ctrX, ctrY,
then creates the polygon by sampling points on a circle around the centre.
Random noise is added by varying the angular spacing between sequential points,
and by varying the radial distance of each point from the centre.
Params:
ctrX, ctrY - coordinates of the "centre" of the polygon
aveRadius - in px, the average radius of this polygon, this roughly controls how large the polygon is, really only useful for order of magnitude.
irregularity - [0,1] indicating how much variance there is in the angular spacing of vertices. [0,1] will map to [0, 2pi/numberOfVerts]
spikeyness - [0,1] indicating how much variance there is in each vertex from the circle of radius aveRadius. [0,1] will map to [0, aveRadius]
numVerts - self-explanatory
Returns a list of vertices, in CCW order.
Website: https://stackoverflow.com/questions/8997099/algorithm-to-generate-random-2d-polygon
%}
irregularity = clip( irregularity, 0,1 ) * 2*pi/ numVerts;
spikeyness = clip( spikeyness, 0,1 ) * aveRadius;
% generate n angle steps
angleSteps = [];
lower = (2*pi / numVerts) - irregularity;
upper = (2*pi / numVerts) + irregularity;
sum = 0;
for i =1:numVerts
tmp = unifrnd(lower, upper);
angleSteps(i) = tmp;
sum = sum + tmp;
end
% normalize the steps so that point 0 and point n+1 are the same
k = sum / (2*pi);
for i =1:numVerts
angleSteps(i) = angleSteps(i) / k;
end
% now generate the points
points = [];
angle = unifrnd(0, 2*pi);
for i =1:numVerts
r_i = clip( normrnd(aveRadius, spikeyness), 0, 2*aveRadius);
x = ctrX + r_i* cos(angle);
y = ctrY + r_i* sin(angle);
points(i,:)= [(x),(y)];
angle = angle + angleSteps(i);
end
end
function value = clip(x, min, max)
if( min > max ); value = x; return; end
if( x < min ) ; value = min; return; end
if( x > max ) ; value = max; return; end
value = x;
end
I have N scalable square tiles (buttons) that need to be placed inside of fixed sized rectangular surface (toolbox). I would like to present the buttons all at the same size.
How could I solve for the optimal size of the tiles that would provide the largest area of the rectangular surface being covered by tiles.
Let W and H be the width and height of the rectangle.
Let s be the length of the side of a square.
Then the number of squares n(s) that you can fit into the rectangle is floor(W/s)*floor(H/s). You want to find the maximum value of s for which n(s) >= N.
If you plot the number of squares against s you will get a piecewise constant function. The discontinuities are at the values W/i and H/j, where i and j run through the positive integers.
You want to find the smallest i for which n(W/i) >= N, and similarly the smallest j for which n(H/j) >= N. Call these smallest values i_min and j_min. Then the largest of W/i_min and H/j_min is the s that you want.
I.e. s_max = max(W/i_min,H/j_min)
To find i_min and j_min, just do a brute force search: for each, start from 1, test, and increment.
In the event that N is very large, it may be distasteful to search the i's and j's starting from 1 (although it is hard to imagine that there will be any noticeable difference in performance). In this case, we can estimate the starting values as follows. First, a ballpark estimate of the area of a tile is W*H/N, corresponding to a side of sqrt(W*H/N). If W/i <= sqrt(W*H/N), then i >= ceil(W*sqrt(N/(W*H))), similarly j >= ceil(H*sqrt(N/(W*H)))
So, rather than start the loops at i=1 and j=1, we can start them at i = ceil(sqrt(N*W/H)) and j = ceil(sqrt(N*H/W))). And OP suggests that round works better than ceil -- at worst an extra iteration.
Here's the algorithm spelled out in C++:
#include <math.h>
#include <algorithm>
// find optimal (largest) tile size for which
// at least N tiles fit in WxH rectangle
double optimal_size (double W, double H, int N)
{
int i_min, j_min ; // minimum values for which you get at least N tiles
for (int i=round(sqrt(N*W/H)) ; ; i++) {
if (i*floor(H*i/W) >= N) {
i_min = i ;
break ;
}
}
for (int j=round(sqrt(N*H/W)) ; ; j++) {
if (floor(W*j/H)*j >= N) {
j_min = j ;
break ;
}
}
return std::max (W/i_min, H/j_min) ;
}
The above is written for clarity. The code can be tightened up considerably as follows:
double optimal_size (double W, double H, int N)
{
int i,j ;
for (i = round(sqrt(N*W/H)) ; i*floor(H*i/W) < N ; i++){}
for (j = round(sqrt(N*H/W)) ; floor(W*j/H)*j < N ; j++){}
return std::max (W/i, H/j) ;
}
I believe this can be solved as a constrained minimisation problem, which requires some basic calculus.
Definitions:
a, l -> rectangle sides
k -> number of squares
s -> side of the squares
You have to minimise the function:
f[s]:= a * l/s^2 - k
subject to the constraints:
IntegerPart[a/s] IntegerPart[l/s] - k >= 0
s > 0
I programmed a little Mathematica function to do the trick
f[a_, l_, k_] := NMinimize[{a l/s^2 - k ,
IntegerPart[a/s] IntegerPart[l/s] - k >= 0,
s > 0},
{s}]
Easy to read since the equations are the same as above.
Using this function I made up a table for allocating 6 squares
As far as I can see, the results are correct.
As I said, you may use a standard calculus package for your environment, or you may also develop your own minimisation algorithm and programs. Ring the bell if you decide for the last option and I'll provide a few good pointers.
HTH!
Edit
Just for fun I made a plot with the results.
And for 31 tiles:
Edit 2: Characteristic Parameters
The problem has three characteristic parameters:
The Resulting Size of the tiles
The Number of Tiles
The ratio l/a of the enclosing rectangle
Perhaps the last one may seem somewhat surprising, but it is easy to understand: if you have a problem with a 7x5 rectangle and 6 tiles to place, looking in the above table, the size of the squares will be 2.33. Now, if you have a 70x50 rectangle, obviously the resulting tiles will be 23.33, scaling isometrically with the problem.
So, we can take those three parameters and construct a 3D plot of their relationship, and eventually match the curve with some function easier to calculate (using least squares for example or computing iso-value regions).
Anyway, the resulting scaled plot is:
I realize this is an old thread but I recently solved this problem in a way that I think is efficient and always gives the correct answer. It is designed to maintain a given aspect ratio. If you wish for the children(buttons in this case) to be square just use an aspect ratio of 1. I am currently using this algorithm in a few places and it works great.
double VerticalScale; // for the vertical scalar: uses the lowbound number of columns
double HorizontalScale;// horizontal scalar: uses the highbound number of columns
double numColumns; // the exact number of columns that would maximize area
double highNumRows; // number of rows calculated using the upper bound columns
double lowNumRows; // number of rows calculated using the lower bound columns
double lowBoundColumns; // floor value of the estimated number of columns found
double highBoundColumns; // ceiling value of the the estimated number of columns found
Size rectangleSize = new Size(); // rectangle size will be used as a default value that is the exact aspect ratio desired.
//
// Aspect Ratio = w / h
// where w is the width of the child and h is the height
//
// the numerator will be the aspect ratio and the denominator will always be one
// if you want it to be square just use an aspect ratio of 1
rectangleSize.Width = desiredAspectRatio;
rectangleSize.Height = 1;
// estimate of the number of columns useing the formula:
// n * W * h
// columns = SquareRoot( ------------- )
// H * w
//
// Where n is the number of items, W is the width of the parent, H is the height of the parent,
// h is the height of the child, and w is the width of the child
numColumns = Math.Sqrt( (numRectangles * rectangleSize.Height * parentSize.Width) / (parentSize.Height * rectangleSize.Width) );
lowBoundColumns = Math.Floor(numColumns);
highBoundColumns = Math.Ceiling(numColumns);
// The number of rows is determined by taking the ceiling of the number of children divided by the columns
lowNumRows = Math.Ceiling(numRectangles / lowBoundColumns);
highNumRows = Math.Ceiling(numRectangles / highBoundColumns);
// Vertical Scale is what you multiply the vertical size of the child to find the expected area if you were to find
// the size of the rectangle by maximizing by rows
//
// H
// Vertical Scale = ----------
// R * h
//
// Where H is the height of the parent, R is the number of rows, and h is the height of the child
//
VerticalScale = parentSize.Height / (lowNumRows * rectangleSize.Height);
// Horizontal Scale is what you multiply the horizontal size of the child by to find the expected area if you were to find
// the size of the rectangle by maximizing by columns
//
//                              W
//    Horizontal Scale = ----------
//                           c * w
//
// Where W is the width of the parent, c is the number of columns, and w is the width of the child
HorizontalScale = parentSize.Width / (highBoundColumns * rectangleSize.Width);
// The Max areas are what is used to determine if we should maximize over rows or columns
// The areas are found by multiplying the scale by the appropriate height or width and finding the area after the scale
//
// Horizontal Area = Sh * w * ( (Sh * w) / A )
//
// where Sh is the horizontal scale, w is the width of the child, and A is the aspect ratio of the child
//
double MaxHorizontalArea = (HorizontalScale * rectangleSize.Width) * ((HorizontalScale * rectangleSize.Width) / desiredAspectRatio);
//
//
// Vertical Area = Sv * h * (Sv * h) * A
// Where Sv isthe vertical scale, h is the height of the child, and A is the aspect ratio of the child
//
double MaxVerticalArea = (VerticalScale * rectangleSize.Height) * ((VerticalScale * rectangleSize.Height) * desiredAspectRatio);
if (MaxHorizontalArea >= MaxVerticalArea) // if the horizontal area is greater than the vertical area then we maximize by columns
{
// the width is determined by dividing the parent's width by the estimated number of columns
// this calculation will work for NEARLY all of the horizontal cases with only a few exceptions
newSize.Width = parentSize.Width / highBoundColumns; // we use highBoundColumns because that's what is used for the Horizontal
newSize.Height = newSize.Width / desiredAspectRatio; // A = w/h or h= w/A
    // In the cases where this doesn't work, it is because the height of the new items is greater than the
    // height of the parent. This only happens when transitioning to putting all the objects into
    // only one row
if (newSize.Height * Math.Ceiling(numRectangles / highBoundColumns) > parentSize.Height)
{
//in this case the best solution is usually to maximize by rows instead
double newHeight = parentSize.Height / highNumRows;
double newWidth = newHeight * desiredAspectRatio;
// However this doesn't always work because in one specific case the number of rows is more than actually needed
// and the width of the objects end up being smaller than the size of the parent because we don't have enough
// columns
if (newWidth * numRectangles < parentSize.Width)
{
//When this is the case the best idea is to maximize over columns again but increment the columns by one
//This takes care of it for most cases for when this happens.
newWidth = parentSize.Width / Math.Ceiling(numColumns++);
newHeight = newWidth / desiredAspectRatio;
// in order to make sure the rectangles don't go over bounds we
// increment the number of columns until it is under bounds again.
while (newWidth * numRectangles > parentSize.Width)
{
newWidth = parentSize.Width / Math.Ceiling(numColumns++);
newHeight = newWidth / desiredAspectRatio;
}
// however after doing this it is possible to have the height too small.
// this will only happen if there is one row of objects. so the solution is to make the objects'
// height equal to the height of their parent
if (newHeight > parentSize.Height)
{
newHeight = parentSize.Height;
newWidth = newHeight * desiredAspectRatio;
}
}
        // if we have a lot of added items, occasionally the previous checks will come very close to maximizing both columns and rows
// what happens in this case is that neither end up maximized
// because we don't know what set of rows and columns were used to get us to where we are
// we must recalculate them with the current measurements
double currentCols = Math.Floor(parentSize.Width / newWidth);
double currentRows = Math.Ceiling(numRectangles/currentCols);
// now we check and see if neither the rows or columns are maximized
if ( (newWidth * currentCols ) < parentSize.Width && ( newHeight * Math.Ceiling(numRectangles/currentCols) ) < parentSize.Height)
{
// maximize by columns first
newWidth = parentSize.Width / currentCols;
            newHeight = newWidth / desiredAspectRatio;
            // if that pushes the rows over their bounds, then maximize by rows instead
if (newHeight * Math.Ceiling(numRectangles / currentCols) > parentSize.Height)
{
newHeight = parentSize.Height / currentRows;
newWidth = newHeight * desiredAspectRatio;
}
}
// finally we have the height of the objects as maximized using columns
newSize.Height = newHeight;
newSize.Width = newWidth;
}
}
else
{
    // Here we use the vertical scale. We determine the height of the objects based upon
// the estimated number of rows.
// This work for all known cases
newSize.Height = parentSize.Height / lowNumRows;
newSize.Width = newSize.Height * desiredAspectRatio;
}
At the end of the algorithm 'newSize' holds the appropriate size. This is written in C# but it would be fairly easy to port to other languages.
The first, very rough heuristic is to take
s = floor( sqrt( (X x Y) / N) )
where s is the button-side-length, X and Y are the width and height of the toolbox, and N is the number of buttons.
In this case, s will be the MAXIMUM possible side-length. It is not necessarily possible to map this set of buttons onto the toolbar, however.
Imagine a toolbar that is 20 units by 1 unit with 5 buttons. The heuristic will give you a side length of 2 (area of 4), with a total covering area of 20. However, half of each button will be outside of the toolbar.
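A quick numeric check of that caveat (a throwaway Python snippet; names are mine):

import math

def heuristic_side(X, Y, N):
    # first rough guess: split the toolbox area evenly among N buttons
    return math.floor(math.sqrt((X * Y) / N))

s = heuristic_side(20, 1, 5)                     # -> 2 for the 20 x 1 toolbar above
fits = math.floor(20 / s) * math.floor(1 / s)    # -> 10 * 0 = 0 buttons actually fit whole
print(s, fits)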
I would take an iterative approach here.
I would check if it is possible to fit all buttons in a single row.
If not, check if it is possible to fit in two rows, and so on.
Say W is the smaller side of the toolbox.
H is the other side.
For each iteration, I would check the best and worst possible cases, in that order. The best case at the nth iteration is to try buttons of size W/n X W/n; if the height H is enough to hold them all, we are done. If not, the worst case is to try buttons of size (W/(n+1))+1 X (W/(n+1))+1. If it is possible to fit all buttons at that size, then I would try a bisection method between W/n and (W/(n+1))+1. If not, the iteration continues at n+1.
Let n(s) be the number of squares that can fit and s their side. Let W, H be the sides of the rectangle to fill. Then n(s) = floor(W/s)* floor(H/s). This is a monotonically decreasing function in s and also piecewise constant, so you can perform a slight modification of binary search to find the smallest s such that n(s) >= N but n(s+eps) < N. You start with an upper and lower bound on s u = min(W, H) and l = floor(min(W,H)/N) then compute t = (u + l) / 2. If n(t) >= N
then l = min(W/floor(W/t), H/floor(H/t)) otherwise u = max(W/floor(W/t), H/floor(H/t)). Stop when u and l stay the same in consecutive iterations.
So it's like binary search, but you exploit the fact that the function is piecewise constant and the change points are when W or H are an exact multiple of s. Nice little problem, thanks for proposing it.
We know that any optimal solution (there may be two) will fill the rectangle either horizontally or vertically. If you found an optimal solution that did not fill the rectangle in one dimension, you could always increase the scale of the tiles to fill one dimension.
Now, any solution that maximizes the surface covered will have an aspect ratio close to the aspect ratio of the rectangle. The aspect ratio of the solution is vertical tile count/horizontal tile count (and the aspect ratio of the rectangle is Y/X).
You can simplify the problem by forcing Y>=X; in other words, if X>Y, transpose the rectangle. This allows you to only think about aspect ratios >= 1, as long as you remember to transpose the solution back.
Once you've calculated the aspect ratio, you want to find solutions to the problem of V/H ~= Y/X, where V is the vertical tile count and H is the horizontal tile count. You will find up to three solutions: the closest V/H to Y/X and V+1, V-1. At that point, just calculate the coverage based on the scale using V and H and take the maximum (there could be more than one).
I need to calculate length of the object in a binary image (maximum distance between the pixels inside the object). As it is a binary image, so we might consider it a 2D array with values 0 (white) and 1 (black). The thing I need is a clever (and preferably simple) algorithm to perform this operation. Keep in mind there are many objects in the image.
The image to clarify:
Sample input image:
I think the problem is simple if the boundary of an object is convex and no three vertices are on a line (i.e. no vertex can be removed without changing the polygon): Then you can just pick two points at random and use a simple gradient-descent type search to find the longest line:
Start with random vertices A, B
See if the line A' - B is longer than A - B where A' is the point left of A; if so, replace A with A'
See if the line A' - B is longer than A - B where A' is the point right of A; if so, replace A with A'
Do the same for B
repeat until convergence
So I'd suggest finding the convex hull for each seed blob, removing all "superfluous" vertices (to ensure convergence) and running the algorithm above.
Constructing a convex hull is an O(n log n) operation IIRC, where n is the number of boundary pixels. Should be pretty efficient for small objects like these. EDIT: I just remembered that the O(n log n) for the convex hull algorithm was needed to sort the points. If the boundary points are the result of a connected component analysis, they are already sorted. So the whole algorithm should run in O(n) time, where n is the number of boundary points. (It's a lot of work, though, because you might have to write your own convex-hull algorithm or modify one to skip the sort.)
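A direct Python transcription of that hill-climbing, using scipy's ConvexHull as a stand-in for the hull construction (a sketch under those assumptions, not a tuned implementation):

import numpy as np
from scipy.spatial import ConvexHull

def blob_length(points):
    # points: (x, y) coordinates of one blob's pixels (at least 3, not all collinear)
    hull = ConvexHull(points)
    verts = np.asarray(points, dtype=float)[hull.vertices]   # hull vertices in CCW order
    n = len(verts)
    dist = lambda i, j: np.linalg.norm(verts[i] - verts[j])
    a, b = 0, n // 2                                         # arbitrary starting vertices
    improved = True
    while improved:
        improved = False
        for cand in ((a - 1) % n, (a + 1) % n):              # try moving A to a neighbour
            if dist(cand, b) > dist(a, b):
                a, improved = cand, True
        for cand in ((b - 1) % n, (b + 1) % n):              # try moving B to a neighbour
            if dist(a, cand) > dist(a, b):
                b, improved = cand, True
    return dist(a, b)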
Add: Response to comment
If you don't need 100% accuracy, you could simply fit an ellipse to each blob and calculate the length of the major axis: This can be computed from central moments (IIRC it's simply the square root of the largest eigenvalue of the covariance matrix), so it's an O(n) operation and can efficiently be calculated in a single sweep over the image. It has the additional advantage that it takes all pixels of a blob into account, not just two extremal points, i.e. it is far less affected by noise.
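For instance, with numpy (a sketch; the factor 4 is my assumption, matching the usual equivalent-ellipse convention, while the paragraph above only mentions the square root of the eigenvalue):

import numpy as np

def approx_blob_length(mask):
    # mask: 2-D boolean array containing a single blob
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.vstack((xs, ys)))      # 2x2 covariance of the pixel coordinates
    lam = np.linalg.eigvalsh(cov).max()    # largest eigenvalue
    return 4.0 * np.sqrt(lam)              # ~ major-axis length of the fitted ellipse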
Find the major-axis length of the ellipse that has the same normalized second central moments as the region. In MATLAB you can use regionprops.
A very crude, brute-force approach would be to first identify all the edge pixels (any black pixel in the object adjacent to a non-black pixel) and calculate the distances between all possible pairs of edge pixels. The longest of these distances will give you the length of the object.
If the objects are always shaped like the ones in your sample, you could speed this up by only evaluating the pixels with the highest and lowest x and y values within the object.
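A direct (and deliberately slow) rendering of that brute-force idea, assuming numpy and a 0/1 mask (names are mine):

import numpy as np

def brute_force_length(mask):
    mask = mask.astype(bool)
    # edge pixels: object pixels with at least one non-object 4-neighbour
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    ys, xs = np.nonzero(mask & ~interior)
    pts = np.column_stack((xs, ys)).astype(float)
    # the object length is the largest distance over all pairs of edge pixels
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).max()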
I would suggest trying a "reverse" distance transform. In the magical world of mathematical morphology (sorry, couldn't resist the alliteration) the distance transform gives you the closest distance of each pixel to its nearest boundary pixel. In your case, you are interested in the farthest distance to a boundary pixel, hence I have cleverly applied a "reverse" prefix.
You can find information on the distance transform here and here. I believe that Matlab implements the distance transform as per here. That would lead me to believe that you can find an open source implementation of the distance transform in Octave. Furthermore, it would not surprise me in the least if OpenCV implemented it.
I haven't given it much thought but its intuitive to me that you should be able to reverse the distance transform and calculate it in roughly the same amount of time as the original distance transform.
I think you could consider using a breadth first search algorithm.
The basic idea is that you loop over each row and column in the image, and if you haven't visited the node (a node is a row and column with a colored pixel) yet, then you would run the breadth first search. You would visit each node you possibly could, and keep track of the max and min points for the object.
Here's some C++ sample code (untested):
#include <vector>
#include <queue>
#include <cmath>
#include <cstring>   // memset
#include <algorithm> // std::max
#include <utility>   // std::pair
using namespace std;
// used to transition from given row, col to each of the
// 8 different directions
int dr[] = { -1, 0, 1, -1, 1, -1, 0, 1 };
int dc[] = { -1, -1, -1, 0, 0, 1, 1, 1 };
// WHITE or COLORED cells
const int WHITE = 0;
const int COLORED = 1;
// number of rows and columns
int nrows = 2000;
int ncols = 2000;
// assume G is the image
int G[2000][2000];
// the "visited array"
bool vis[2000][2000];
// get distance between 2 points
inline double getdist(double x1, double y1, double x2, double y2) {
double d1 = x1 - x2;
double d2 = y1 - y2;
return sqrt(d1*d1+d2*d2);
}
// this function performs the breadth first search
double bfs(int startRow, int startCol) {
queue< int > q;
q.push(startRow);
q.push(startCol);
vector< pair< int, int > > points;
while(!q.empty()) {
int r = q.front();
q.pop();
int c = q.front();
q.pop();
// already visited?
if (vis[r][c])
continue;
points.push_back(make_pair(r,c));
vis[r][c] = true;
        // try all eight directions
        for(int i = 0; i < 8; ++i) {
            int nr = r + dr[i];
            int nc = c + dc[i];
            if (nr < 0 || nr >= nrows || nc < 0 || nc >= ncols)
                continue; // out of bounds
            if (G[nr][nc] == WHITE || vis[nr][nc])
                continue; // only expand into unvisited colored cells
            // push next node on queue
            q.push(nr);
            q.push(nc);
        }
}
// the distance is maximum difference between any 2 points encountered in the BFS
double diff = 0;
for(int i = 0; i < (int)points.size(); ++i) {
for(int j = i+1; j < (int)points.size(); ++j) {
diff = max(diff,getdist(points[i].first,points[i].second,points[j].first,points[j].second));
}
}
return diff;
}
int main() {
vector< double > lengths;
memset(vis,false,sizeof vis);
for(int r = 0; r < nrows; ++r) {
for(int c = 0; c < ncols; ++c) {
if (G[r][c] == WHITE)
continue; // we don't care about cells without objects
if (vis[r][c])
continue; // we've already processed this object
// find the length of this object
double len = bfs(r,c);
lengths.push_back(len);
}
}
return 0;
}