How to calculate the average distance between 2 neighboring H3 Cells? - h3

I'm using the Uber H3 Library for Java https://github.com/uber/h3-java
I'm trying to figure out the average distance between two neighboring H3 cells of resolution 15 (the distance between their respective center points, i.e. 2 × apothem).
I'm getting 3 significantly different results, depending on how I calculate it:
1.148 meters
0.882 meters
0.994 meters
Which one is correct? And why am I getting such different results?
public static final long RES_15_HEXAGON = 644569124665188486L; // some random res 15 H3 index (as Long)
private final H3Core h3;
H3UtilsTest() throws IOException {
h3 = H3Core.newInstance();
}
@Test
void calculateDistanceBetweenNeighborsFromRealEdges() {
final long h3Index = RES_15_HEXAGON;
final GeoCoord centerPoint = h3.h3ToGeo(h3Index);
assertEquals(46.94862876826281, centerPoint.lat);
assertEquals(7.4404471879141205, centerPoint.lng);
assertEquals(15, h3.h3GetResolution(h3Index));
final List<GeoCoord> hexagon = this.h3.h3ToGeoBoundary(h3Index);
assertEquals(6, hexagon.size());
List<Double> edgeLengths = new ArrayList<>();
for (int i = 0; i < hexagon.size(); i++) {
edgeLengths.add(h3.pointDist(hexagon.get(i), hexagon.get((i + 1) % hexagon.size()), LengthUnit.m));
}
final double apothem = edgeLengths.stream().mapToDouble(Double::doubleValue).average().getAsDouble();
assertEquals(0.5739852101653261, apothem);
final double distanceBetweenNeighbors = 2 * apothem;
assertEquals(1.1479704203306522, distanceBetweenNeighbors);
}
@Test
void calculateDistanceBetweenNeighborsFromAverageEdges() {
final double averageEdgeLength = h3.edgeLength(15, LengthUnit.m);
assertEquals(0.509713273, averageEdgeLength);
assertEquals(1.019426546, 2 * averageEdgeLength);
final double apothem = averageEdgeLength * Math.sqrt(3) / 2;
assertEquals(0.4414246430641128, apothem);
final double distanceBetweenNeighbors = 2 * apothem;
assertEquals(0.8828492861282256, distanceBetweenNeighbors);
}
@Test
void calculateDistanceBetweenNeighborsFromNeighbors() {
final GeoCoord origin = h3.h3ToGeo(RES_15_HEXAGON);
final List<Long> neighbors = h3.kRing(RES_15_HEXAGON, 1);
assertEquals(7, neighbors.size()); // contains the center hexagon as well
neighbors.forEach(neighbor -> assertEquals(6, h3.h3ToGeoBoundary(neighbor).size())); // they are really 6-sided hexagons!
final List<Double> distances = neighbors.stream().filter(neighbor -> neighbor != RES_15_HEXAGON).map(neighbor -> h3.pointDist(origin, h3.h3ToGeo(neighbor), LengthUnit.m)).toList();
assertEquals(6, distances.size());
final Double distanceBetweenNeighbors = distances.stream().mapToDouble(Double::doubleValue).average().getAsDouble();
assertEquals(0.9941567117250641, distanceBetweenNeighbors);
}

I assume you're getting different answers for these different approaches due to shape distortion of the "hexagon" shape across the grid. This is the cell you chose and its neighbors:
You can see that the hexagons are not perfectly regular - while H3 tries to minimize distortion, we do have shape distortion that varies across the globe. Cell size and shape will vary slightly depending on where you choose to sample.
The "correct" answer here depends heavily on your use case. We can likely calculate an average distance between cell centers for the whole globe, but you might actually care about a more localized area. Or the average may not be what you really need - you may want to calculate this on a per-cell basis. In most cases, I have found that it is better to either:
Calculate the actual distances for the cells of interest, using pointDist between cell centers (see the sketch below), or
Sample within an area of interest, e.g. an application viewport, and calculate an average from the sample.
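To illustrate the first option, here is a minimal sketch using the same v3 Java binding calls already shown in the question (h3ToGeo, kRing, pointDist); the test name and variables are just illustrative:
@Test
void calculatePerCellNeighborDistance() throws IOException {
    H3Core h3 = H3Core.newInstance();
    long cell = 644569124665188486L;          // the res-15 cell from the question
    GeoCoord center = h3.h3ToGeo(cell);
    double sum = 0;
    int count = 0;
    for (long neighbor : h3.kRing(cell, 1)) { // kRing(cell, 1) includes the origin cell itself
        if (neighbor == cell) continue;
        sum += h3.pointDist(center, h3.h3ToGeo(neighbor), LengthUnit.m);
        count++;
    }
    double localAverage = sum / count;        // per-cell average distance to the immediate neighbors
}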

For what it's worth, I implemented a way to create a larger sample, inspired by the suggestion by @nrabinowitz:
public static final long RES_15_HEXAGON = 644569124665188486L;
private final H3Core h3;
H3UtilsTest() throws IOException {
h3 = H3Core.newInstance();
}
@Test
void calculateDistanceBetweenNeighborsUsingLargerSample() {
final Set<Long> origins = new HashSet<>(Set.of(RES_15_HEXAGON));
final Set<Long> visitedOrigins = new HashSet<>();
final Set<Long> nextOrigins = new HashSet<>();
final List<Double> distances = new ArrayList<>();
final int nbIterations = 10;
for (int i = 0; i < nbIterations; i++) {
for (Long origin : origins) {
visitedOrigins.add(origin);
final Set<Long> destinations = new HashSet<>(h3.kRing(origin, 1));
destinations.removeAll(visitedOrigins);
for (Long destination : destinations) {
distances.add(h3.pointDist(h3.h3ToGeo(origin), h3.h3ToGeo(destination), LengthUnit.m));
}
destinations.removeAll(origins); // we don't need the hexagons (anymore) that are on the same ring as origins
nextOrigins.addAll(destinations);
}
origins.clear();
origins.addAll(nextOrigins);
nextOrigins.clear();
}
final double averageDistanceBetweenNeighbors = distances.stream().mapToDouble(Double::doubleValue).average().getAsDouble();
assertEquals(0.9942, averageDistanceBetweenNeighbors, 0.0001);
assertEquals(870, distances.size());
}

Related

Custom input and output of altitude of the GeoPoint in OSMdroid

I can't read or evaluate the custom altitude for a given GeoPoint in a List.
Basically I would like to have a path with x, y, Z in OSMdroid. The Z value (altitude) has to ascend until the middle of the path, then descend. It has to be a Point or GeoPoint.
When I create a List, it successfully adds a GeoPoint with altitude.
When I read it with timer.setText("Your Coordinates are :" + marker.getPosition()+alt_S);
in this setup (animation, interpolation) the altitude remains 0.0.
Please help.
Thanks
The code below reads the latitude and longitude of a marker, but it doesn't read the altitude that I try to obtain when I stop the animation.
Following this post:
Osmdroid map marker animation
public void animateMarker(final Marker marker, final GeoPoint toPosition) {
final Handler handler = new Handler();
final long start = SystemClock.uptimeMillis();
Projection proj = map.getProjection();
Point startPoint = proj.toPixels(marker.getPosition(), null);
final IGeoPoint startGeoPoint = proj.fromPixels(startPoint.x, startPoint.y);
final long duration = 5000;
LinearInterpolator interpolator = new LinearInterpolator();
handler.post(new Runnable() {
@Override
public void run() {
long elapsed = SystemClock.uptimeMillis() - start;
float t = interpolator.getInterpolation((float) elapsed / duration);
for (int i = 0; i < geoPoints.size(); i++) {
ListIterator<GeoPoint> geoPTS = geoPoints.listIterator();
while (geoPTS.hasNext()) {
GeoPoint test = geoPTS.next();
// do something with o
double lng = t * toPosition.getLongitude() + (1 - t) * startGeoPoint.getLongitude();
double lat = t * toPosition.getLatitude() + (1 - t) * startGeoPoint.getLatitude();
double alt = test.getAltitude();
marker.setPosition(new GeoPoint(lat, lng,alt));
stopper = findViewById(R.id.buttonStop);
if (stopper.isPressed()) {
handler.removeCallbacks(this);
timer.getText();
alt_S = Double.toString(alt);
timer.setText("Your Coordinates are :" + marker.getPosition()+alt_S);
}
}
}
if (t < 1.0) {
handler.postDelayed(this, 15);
}
map.postInvalidate();
}
});
}
Folks, I ended up using fractions of 0.0001; then, if my flying object reaches or exceeds half of the total (1), I subtract from the altitude reached at the middle. I had to define the duration of the interpolation (that's another story).

Processing | Program is lagging

I'm new to Processing and I need to make a program that captures the main monitor, shows the average color on the second screen, and draws a spiral using another color (the perceptual dominant color) obtained from a function.
The problem is that the program is very slow (it lags at about 1 FPS). I think it's because it has too many things to do every time I take a screenshot, but I have no idea how to make it faster.
There could also be many other problems, but that is the main one.
Thank you very much!
Here's the code:
import java.awt.Robot;
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.color.ColorSpace;
PImage screenshot;
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
}
void draw() {
screenshot();
avg_c = extractColorFromImage(screenshot);
per_c = extractAverageColorFromImage(screenshot);
background(avg_c); // Average color
spiral();
}
void screenshot() {
try{
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
}
catch (AWTException e){ }
frame.setLocation(displayWidth/2, 0);
}
void spiral() {
fill (per_c);
for (int i = blockSize; i < width; i += blockSize*2)
{
ellipse(i, height/2+sin(a+i)*100, blockSize+cos(a+i)*5, blockSize+cos(a+i)*5);
a += 0.001;
}
}
color extractColorFromImage(PImage screenshot) { // Get average color
screenshot.loadPixels();
int r = 0, g = 0, b = 0;
for (int i = 0; i < screenshot.pixels.length; i++) {
color c = screenshot.pixels[i];
r += c>>16&0xFF;
g += c>>8&0xFF;
b += c&0xFF;
}
r /= screenshot.pixels.length; g /= screenshot.pixels.length; b /= screenshot.pixels.length;
return color(r, g, b);
}
color extractAverageColorFromImage(PImage screenshot) { // Get lab average color (perceptual)
float[] average = new float[3];
CIELab lab = new CIELab();
int numPixels = screenshot.pixels.length;
for (int i = 0; i < numPixels; i++) {
color rgb = screenshot.pixels[i];
float[] labValues = lab.fromRGB(new float[]{red(rgb),green(rgb),blue(rgb)});
average[0] += labValues[0];
average[1] += labValues[1];
average[2] += labValues[2];
}
average[0] /= numPixels;
average[1] /= numPixels;
average[2] /= numPixels;
float[] rgb = lab.toRGB(average);
return color(rgb[0] * 255,rgb[1] * 255, rgb[2] * 255);
}
public class CIELab extends ColorSpace {
@Override
public float[] fromCIEXYZ(float[] colorvalue) {
double l = f(colorvalue[1]);
double L = 116.0 * l - 16.0;
double a = 500.0 * (f(colorvalue[0]) - l);
double b = 200.0 * (l - f(colorvalue[2]));
return new float[] {(float) L, (float) a, (float) b};
}
@Override
public float[] fromRGB(float[] rgbvalue) {
float[] xyz = CIEXYZ.fromRGB(rgbvalue);
return fromCIEXYZ(xyz);
}
@Override
public float getMaxValue(int component) {
return 128f;
}
@Override
public float getMinValue(int component) {
return (component == 0)? 0f: -128f;
}
@Override
public String getName(int idx) {
return String.valueOf("Lab".charAt(idx));
}
@Override
public float[] toCIEXYZ(float[] colorvalue) {
double i = (colorvalue[0] + 16.0) * (1.0 / 116.0);
double X = fInv(i + colorvalue[1] * (1.0 / 500.0));
double Y = fInv(i);
double Z = fInv(i - colorvalue[2] * (1.0 / 200.0));
return new float[] {(float) X, (float) Y, (float) Z};
}
@Override
public float[] toRGB(float[] colorvalue) {
float[] xyz = toCIEXYZ(colorvalue);
return CIEXYZ.toRGB(xyz);
}
CIELab() {
super(ColorSpace.TYPE_Lab, 3);
}
private double f(double x) {
if (x > 216.0 / 24389.0) {
return Math.cbrt(x);
} else {
return (841.0 / 108.0) * x + N;
}
}
private double fInv(double x) {
if (x > 6.0 / 29.0) {
return x*x*x;
} else {
return (108.0 / 841.0) * (x - N);
}
}
private final ColorSpace CIEXYZ =
ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
private final double N = 4.0 / 29.0;
}
There's lots that can be done, even beyond what's already been mentioned.
Iteration & Threading
After taking the screenshot, immediately iterate over every Nth pixel (perhaps every 4th or 8th) of the buffered image. Then, during this iteration, calculate the LAB value for each pixel (as you have each pixel channel directly available), and meanwhile increment the running total of each RGB channel.
This saves us from iterating over the same pixels twice and avoids unnecessary conversions (BufferedImage → PImage; and composing then decomposing pixel channels from PImage pixels).
Likewise, we avoid Processing's expensive resize() call (as suggested in another answer), which is not something we want to call every frame (even though it does speed the program up, it's not an efficient method).
Now, on top of the iteration change, we can wrap the iteration in a Callable to easily run the workload across multiple system threads concurrently (after all, pixel iteration is embarrassingly parallel); the example below does this with 2 threads, each screenshotting and processing half of the display's pixels.
Optimise RGB→XYZ→LAB conversion
We're not so concerned about the backwards conversion, since that's only done for one value per frame.
It looks like you've implemented XYZ→LAB yourself and are using the RGB→XYZ converter from java.awt.color.
As has been identified, the forward conversion XYZ→LAB uses a cbrt(), which is a bottleneck. I also imagine that the RGB→XYZ implementation makes 3 calls to Math.pow(x, 2.4); three non-integer exponents per pixel add considerably to the computation. The solution is faster math...
Jafama
Jafama is a drop-in java.lang.Math replacement -- simply import the library and replace any Math.__() calls with FastMath.__() for a free speedup (you could go even further by trading Jafama's E-15 precision for less accurate and even faster dedicated LUT-based classes).
So at the very least, swap out Math.cbrt() for FastMath.cbrt(). Then consider implementing RGB→XYZ yourself (example), again using Jafama in place of java.lang.Math.
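For example, the f() helper in the CIELab class could swap in Jafama's cube root; a minimal sketch, assuming the Jafama jar (net.jafama) has been added to the sketch and N is the constant already defined in CIELab:
import net.jafama.FastMath; // at the top of the sketch, with the Jafama jar added

private double f(double x) {
    if (x > 216.0 / 24389.0) {
        return FastMath.cbrt(x); // Jafama's faster cube root in place of Math.cbrt()
    } else {
        return (841.0 / 108.0) * x + N; // N as already defined in CIELab
    }
}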
You may even find that for such a project XYZ alone is a sufficient color space to work with to overcome the well-known weaknesses of RGB (and therefore save yourself the XYZ→LAB conversion).
Cache LAB Calculation
Unless most pixels are changing every frame, consider caching the LAB value for every pixel, recalculating it only when the pixel has changed between the current and previous frames. The tradeoff here is the overhead of checking every pixel against its previous value, versus how much calculation positive checks will save. Given that the LAB calculation is much more expensive, it's very worthwhile here. The example below uses this technique.
Screen Capture
No matter how well optimised the rest of the program is, a considerable bottleneck is the AWT Robot's createScreenCapture(). It struggles to go past 30 FPS on large enough displays. I can't offer any exact advice, but it's worth looking at other screen capture methods in Java.
Reworked code with iteration changes & threading
This code implements what has been discussed above, minus any changes to the LAB calculation.
float a = 0;
int blockSize = 20;
int avg_c;
int per_c;
java.util.concurrent.ExecutorService threadPool = java.util.concurrent.Executors.newFixedThreadPool(4);
List<java.util.concurrent.Callable<Boolean>> taskList;
float[] averageLAB;
int totalR = 0, totalG = 0, totalB = 0;
CIELab lab = new CIELab();
final int pixelStride = 8; // look at every 8th pixel
void setup() {
size(800, 800, FX2D);
noStroke();
frame.removeNotify();
taskList = new ArrayList<java.util.concurrent.Callable<Boolean>>();
Compute thread1 = new Compute(0, 0, width, height/2);
Compute thread2 = new Compute(0, height/2, width, height/2);
taskList.add(thread1);
taskList.add(thread2);
}
void draw() {
totalR = 0; // re init
totalG = 0; // re init
totalB = 0; // re init
averageLAB = new float[3]; // re init
final int numPixels = (width*height)/pixelStride;
try {
threadPool.invokeAll(taskList); // run threads now and block until completion of all
}
catch (Exception e) {
e.printStackTrace();
}
// calculate average LAB
averageLAB[0]/=numPixels;
averageLAB[1]/=numPixels;
averageLAB[2]/=numPixels;
final float[] rgb = lab.toRGB(averageLAB);
per_c = color(rgb[0] * 255, rgb[1] * 255, rgb[2] * 255);
// calculate average RGB
totalR/=numPixels;
totalG/=numPixels;
totalB/=numPixels;
avg_c = color(totalR, totalG, totalB);
background(avg_c); // Average color
spiral();
fill(255, 0, 0);
text(frameRate, 10, 20);
}
class Compute implements java.util.concurrent.Callable<Boolean> {
private final Rectangle screenRegion;
private Robot robot_Screenshot;
private final int[] previousRGB;
private float[][] previousLAB;
Compute(int x, int y, int w, int h) {
screenRegion = new Rectangle(x, y, w, h);
previousRGB = new int[w*h];
previousLAB = new float[w*h][3];
try {
robot_Screenshot = new Robot();
}
catch (AWTException e1) {
e1.printStackTrace();
}
}
@Override
public Boolean call() {
BufferedImage rawScreenshot = robot_Screenshot.createScreenCapture(screenRegion);
int[] ssPixels = new int[rawScreenshot.getWidth()*rawScreenshot.getHeight()]; // screenshot pixels
rawScreenshot.getRGB(0, 0, rawScreenshot.getWidth(), rawScreenshot.getHeight(), ssPixels, 0, rawScreenshot.getWidth()); // copy buffer to int[] array
for (int pixel = 0; pixel < ssPixels.length; pixel+=pixelStride) {
// get individual colour channels
final int pixelColor = ssPixels[pixel];
final int R = pixelColor >> 16 & 0xFF;
final int G = pixelColor >> 8 & 0xFF;
final int B = pixelColor & 0xFF;
if (pixelColor != previousRGB[pixel]) { // if pixel has changed recalculate LAB value
float[] labValues = lab.fromRGB(new float[]{R/255f, G/255f, B/255f}); // note that I've fixed this; beforehand you were missing the /255, so it was always white.
previousLAB[pixel] = labValues;
}
averageLAB[0] += previousLAB[pixel][0];
averageLAB[1] += previousLAB[pixel][1];
averageLAB[2] += previousLAB[pixel][2];
totalR+=R;
totalG+=G;
totalB+=B;
previousRGB[pixel] = pixelColor; // cache last result
}
return true;
}
}
800x800px; pixelStride = 4; fairly static screen background
Yeesh, about 1 FPS on my machine:
Optimizing code can be really hard, so instead of reading everything looking for stuff to improve, I started by testing where you were losing so much processing power. The answer was at this line:
per_c = extractAverageColorFromImage(screenshot);
The extractAverageColorFromImage method is well written, but it underestimates the amount of work it has to do. There is a quadratic relationship between the dimensions of a screen and the number of pixels it contains, so the bigger the screen, the worse the situation. And this method processes every pixel of the screenshot all the time, several times per screenshot.
This is a lot of work for an average color. Now, if there was a way to cut some corners... maybe a smaller screen, or a smaller screenshot... oh! there is! Let's resize the screenshot. After all, we don't need to go into such details as individual pixels for an average. In the screenshot method, add this line:
void screenshot() {
try {
Robot robot_Screenshot = new Robot();
screenshot = new PImage(robot_Screenshot.createScreenCapture(new Rectangle(0, 0, displayWidth, displayHeight)));
// ADD THE NEXT LINE
screenshot.resize(width/4, height/4);
}
catch (AWTException e) {
}
frame.setLocation(displayWidth/2, 0);
}
I divided each dimension by 4, but I encourage you to tweak this number until you have the fastest satisfying result you can get. This is just a proof of concept:
As you can see, resizing the screenshot and making it 4x smaller gives me 10x more speed. That's not a miracle, but it's much better, and I can't see a difference in the end result - but about that part, you'll have to use your own judgement, as you are the one who knows what your project is about. Hope it'll help!
Have fun!
Unfortunately I can't provide a detailed answer like laancelot (+1), but hopefully I can provide a few tips:
Resizing the image is definitely a good direction. Bear in mind you can also skip a number of pixels instead of processing every single pixel. (If you handle the pixel indices correctly, you can get a similar effect to resize() without calling it, though that won't save you a lot of CPU time.)
Don't create a new Robot instance multiple times a second. Create it once in setup and re-use it. (This is more of a good habit to get into)
Use a CPU profiler, such as the one in VisualVM to see what exactly is slow and aim to optimise the slowest stuff first.
point 1 example:
for (int i = 0; i < numPixels; i+= 100)
point 2 example:
Robot robot_Screenshot;
...
void setup() {
fullScreen(2); // 1920x1080
noStroke();
frame.removeNotify();
try{
robot_Screenshot = new Robot();
}catch(AWTException e){
println("error setting up screenshot Robot instance");
e.printStackTrace();
}
}
...
void screenshot() {
screenshot = new PImage(robot_Screenshot.createScreenCapture
(new Rectangle(0, 0, displayWidth, displayHeight)));
frame.setLocation(displayWidth/2, 0);
}
point 3 example:
Notice the slowest bits are actually AWT's fromRGB and Math.cbrt()
I'd suggest finding an alternative RGB -> XYZ -> L*a*b* conversion method that is simpler (mainly functions, fewer classes, no AWT or other dependencies) and hopefully faster.
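For reference, here is a dependency-free sketch of the usual sRGB (D65) → XYZ → L*a*b* conversion, as one possible replacement for the AWT ColorSpace round trip; the constants are the standard sRGB matrix and D65 white point, the function names are my own, and you could swap Math.pow/Math.cbrt for the Jafama equivalents as suggested above:
// r, g, b expected in 0..255; returns {L, a, b}
float[] rgbToLab(int r, int g, int b) {
    // 1. undo the sRGB gamma
    double rl = invGamma(r / 255.0), gl = invGamma(g / 255.0), bl = invGamma(b / 255.0);
    // 2. linear RGB -> XYZ (sRGB matrix), normalised by the D65 white point
    double x = (0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl) / 0.95047;
    double y = (0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl);
    double z = (0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl) / 1.08883;
    // 3. XYZ -> Lab
    double fx = fLab(x), fy = fLab(y), fz = fLab(z);
    return new float[] { (float) (116 * fy - 16), (float) (500 * (fx - fy)), (float) (200 * (fy - fz)) };
}
double invGamma(double c) {
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
double fLab(double t) {
    return t > 216.0 / 24389.0 ? Math.cbrt(t) : (841.0 / 108.0) * t + 4.0 / 29.0;
}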

Algorithm for finding area of all rectangles

Let's say we get rectangles in the form of 4 points: (x1, y1), ..., (x4, y4).
We want to calculate the total area they cover; if rectangles overlap, we count the overlapping area only once.
I'm not really looking for finished solution, pseudocode or some links to algorithms and data structures useful here would be appreciated.
The rectangles are given by three integers: left position, right position and height. For example:
L:0 R:2 H:2
L:1 R:3 H:3
L:-1 R:4 H:1
total area would be: 10
The x values range from -1e9 to 1e9; each rectangle starts at x=L and ends at x=R.
y cannot be lower than 0; each rectangle always starts at y=0 and ends at y=H.
Let's assume that your rectangles have integer coordinates in a very small range, say 0 to 10. Then an easy approach is to create a grid and paint the rectangles onto it:
The occupied "pixels" could be stored in a set or they could be set bits in a bitmap. When rectangles overlap, the intersection is just marked as occupied again and so contributes only once to the area. The area is the number of occupied cells.
For large dimensions, the data structures would become too big. Also, painting a rectangle that is several million pixels wide would be slow.
But the technique can still be applied when we use compressed coordinates. Your grid then has only the coordinates that are actual coordinates of the rectangles. The cells have variable widths and heights. The number of cells depends on the number of rectangles, but it is independent of the minimum and maximum coordinates:
The algorithm would look like this (a Java sketch follows the list):
Find all unique x coordinates. Sort them in ascending order and store (coordinate, index) pairs in a dictionary for easy look-up.
Do the same for the y coordinates.
Paint the rectangles: Find the indices of x and y ranges and occupy the cells between them.
Calculate the area by summing the area of all occupied cells.
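Here is a compact Java sketch of those four steps; the rectangle format and names are my own, assuming axis-aligned rectangles given as {x1, y1, x2, y2} (for the question's L/R/H form, use {L, 0, R, H}):
import java.util.*;

class UnionArea {
    static long unionArea(long[][] rects) {
        if (rects.length == 0) return 0;
        // 1. + 2. Collect the unique x and y coordinates and map each one to its index.
        TreeSet<Long> xsSet = new TreeSet<>(), ysSet = new TreeSet<>();
        for (long[] r : rects) { xsSet.add(r[0]); xsSet.add(r[2]); ysSet.add(r[1]); ysSet.add(r[3]); }
        long[] xs = xsSet.stream().mapToLong(Long::longValue).toArray();
        long[] ys = ysSet.stream().mapToLong(Long::longValue).toArray();
        Map<Long, Integer> xIdx = new HashMap<>(), yIdx = new HashMap<>();
        for (int i = 0; i < xs.length; i++) xIdx.put(xs[i], i);
        for (int j = 0; j < ys.length; j++) yIdx.put(ys[j], j);
        // 3. Paint each rectangle onto the grid of variable-sized cells.
        boolean[][] occupied = new boolean[xs.length - 1][ys.length - 1];
        for (long[] r : rects)
            for (int i = xIdx.get(r[0]); i < xIdx.get(r[2]); i++)
                for (int j = yIdx.get(r[1]); j < yIdx.get(r[3]); j++)
                    occupied[i][j] = true;
        // 4. Sum the area of every occupied cell.
        long area = 0;
        for (int i = 0; i < xs.length - 1; i++)
            for (int j = 0; j < ys.length - 1; j++)
                if (occupied[i][j]) area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j]);
        return area;
    }

    public static void main(String[] args) {
        // The example from the question: L:0 R:2 H:2, L:1 R:3 H:3, L:-1 R:4 H:1
        long[][] rects = { {0, 0, 2, 2}, {1, 0, 3, 3}, {-1, 0, 4, 1} };
        System.out.println(unionArea(rects)); // prints 10
    }
}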
These rectangles have their base at y=0? I'm assuming that's true. So these are like buildings in a city viewed from a distance. You're trying to trace the skyline.
Store the rectangles in one array so you can use the array indices as unique IDs. Represent each left and right rectangle edge as an "event" that includes the ID for the rectangle it belongs to and the x-coordinate of the corresponding edge. Put all the events in a list EL and sort by x-coordinate. Finally you'll need a dynamically sorted set (e.g. a Java TreeSet) of rectangle IDs sorted by corresponding rectangle height descending. It's called SL, the "sweep line". Due to the way it's sorted, SL.first is always the ID of the highest rectangle currently referenced by the SL.
Now you can draw the outline of the collection of rectangles as follows:
SL = <empty> // sweep line
x0 = EL.first.left // leftmost x among all rectangle edges
lastX = x0
for each event E in EL // process events left-to-right
Let y0 = if SL.isEmpty then 0 else SL.first.height // current y
if E.ID in SL // event in SL means sweep line is at rectangle's right edge
remove E.ID from SL
else // event means sweep line is a new rectangle's left edge
add E.ID to SL
Let y1 = if SL.isEmpty then 0 else SL.first.height // new y
if y1 != y0
output line seg (lastX, y0) -> (E.x, y0)
output line seg (E.x, y0) -> (E.x, y1)
lastX = E.x
output final line seg (lastX, 0) -> (x0, 0)
Because this sounds like homework or maybe an interview question, I'll let you revise this algorithm to provide the area of the swept-out shape rather than drawing its edge.
Addition
Just for fun:
import java.util.ArrayList;
import static java.lang.Integer.compare;
import static java.util.Arrays.stream;
import static java.util.Collections.sort;
import java.util.Comparator;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;
class SkyLine {
static class Rectangle {
final int left;
final int right;
final int height;
Rectangle(int left, int right, int height) {
this.left = left;
this.right = right;
this.height = height;
}
}
static class Event implements Comparable<Event> {
final int x;
final int id;
public Event(int x, int id) {
this.x = x;
this.id = id;
}
@Override
public int compareTo(Event e) { return compare(x, e.x); }
}
final List<Rectangle> rectangles = new ArrayList<>();
final Comparator<Integer> byHeightDescending =
(a, b) -> compare(rectangles.get(b).height, rectangles.get(a).height);
final SortedSet<Integer> scanLine = new TreeSet<>(byHeightDescending);
final List<Event> events = new ArrayList<>();
SkyLine(Rectangle [] data) {
stream(data).forEach(rectangles::add);
int id = 0;
for (Rectangle r : rectangles) {
events.add(new Event(r.left, id));
events.add(new Event(r.right, id));
++id;
}
sort(events);
}
int area() {
int area = 0;
Event ePrev = null;
for (Event e : events) {
if (ePrev != null && !scanLine.isEmpty()) area += (e.x - ePrev.x) * rectangles.get(scanLine.first()).height; // gaps where no rectangle is active contribute nothing
if (!scanLine.remove(e.id)) scanLine.add(e.id);
ePrev = e;
}
return area;
}
public static void main(String [] args) {
Rectangle [] data = {
new Rectangle(0, 2, 2),
new Rectangle(1, 3, 3),
new Rectangle(-1, 4, 1),
};
int area = new SkyLine(data).area();
System.out.println(area);
}
}

An algorithm to space out overlapping rectangles?

This problem actually deals with rollovers; I'll just generalize it below as such:
I have a 2D view, and I have a number of rectangles within an area on the screen. How do I spread out those boxes such that they don't overlap each other, but only adjust them with minimal moving?
The rectangles' positions are dynamic and dependent on user's input, so their positions could be anywhere.
Attached images show the problem and desired solution
The real life problem deals with rollovers, actually.
Answers to the questions in the comments
Size of rectangles is not fixed, and is dependent on the length of the text in the rollover
About screen size, right now I think it's better to assume that the size of the screen is enough for the rectangles. If there is too many rectangles and the algo produces no solution, then I just have to tweak the content.
The requirement to 'move minimally' is more for aesthetics than an absolute engineering requirement. One could space out two rectangles by adding a vast distance between them, but it won't look good as part of the GUI. The idea is to get the rollover/rectangle as close as possible to its source (which I will then connect to the source with a black line). So either 'moving just one by x' or 'moving both by half x' is fine.
I was working a bit on this, as I also needed something similar, but I had delayed the algorithm development. You helped me to get some impulse :D
I also needed the source code, so here it is. I worked it out in Mathematica, but as I haven't used the functional features heavily, I guess it'll be easy to translate to any procedural language.
A historic perspective
First I decided to develop the algorithm for circles, because the intersection is easier to calculate. It just depends on the centers and radii.
I was able to use the Mathematica equation solver, and it performed nicely.
Just look:
It was easy. I just loaded the solver with the following problem:
For each circle
Solve[
Find new coördinates for the circle
Minimizing the distance to the geometric center of the image
Taking in account that
Distance between centers > R1+R2 *for all other circles
Move the circle in a line between its center and the
geometric center of the drawing
]
As straightforward as that, and Mathematica did all the work.
I said "Ha! it's easy, now let's go for the rectangles!". But I was wrong ...
Rectangular Blues
The main problem with the rectangles is that querying the intersection is a nasty function. Something like:
So, when I tried to feed up Mathematica with a lot of these conditions for the equation, it performed so badly that I decided to do something procedural.
My algorithm ended up as follows:
Expand each rectangle size by a few points to get gaps in final configuration
While There are intersections
sort list of rectangles by number of intersections
push most intersected rectangle on stack, and remove it from list
// Now all remaining rectangles don't intersect each other
While stack not empty
pop rectangle from stack and re-insert it into list
find the geometric center G of the chart (each time!)
find the movement vector M (from G to rectangle center)
move the rectangle incrementally in the direction of M (both sides)
until no intersections
Shrink the rectangles to its original size
You may note that the "smallest movement" condition is not completely satisfied (only in one direction). But I found that moving the rectangles in any direction to satisfy it, sometimes ends up with a confusing map changing for the user.
As I am designing a user interface, I chose to move the rectangle a little further, but in a more predictable way. You can change the algorithm to inspect all angles and all radii surrounding its current position until an empty place is found, although it'll be much more demanding.
Anyway, these are examples of the results (before/ after):
Edit> More examples here
As you may see, the "minimum movement" is not satisfied, but the results are good enough.
I'll post the code here because I'm having some trouble with my SVN repository. I'll remove it when the problems are solved.
Edit:
You may also use R-Trees for finding rectangle intersections, but it seems an overkill for dealing with a small number of rectangles. And I haven't the algorithms already implemented. Perhaps someone else can point you to an existing implementation on your platform of choice.
Warning! Code is a first approach .. not great quality yet, and surely has some bugs.
It's Mathematica.
(*Define some functions first*)
Clear["Global`*"];
rn[x_] := RandomReal[{0, x}];
rnR[x_] := RandomReal[{1, x}];
rndCol[] := RGBColor[rn[1], rn[1], rn[1]];
minX[l_, i_] := l[[i]][[1]][[1]]; (*just for easy reading*)
maxX[l_, i_] := l[[i]][[1]][[2]];
minY[l_, i_] := l[[i]][[2]][[1]];
maxY[l_, i_] := l[[i]][[2]][[2]];
color[l_, i_]:= l[[i]][[3]];
intersectsQ[l_, i_, j_] := (* l list, (i,j) indexes,
list={{x1,x2},{y1,y2}} *)
(*A rect does intersect with itself*)
If[Max[minX[l, i], minX[l, j]] < Min[maxX[l, i], maxX[l, j]] &&
Max[minY[l, i], minY[l, j]] < Min[maxY[l, i], maxY[l, j]],
True,False];
(* Number of Intersects for a Rectangle *)
(* With i as index*)
countIntersects[l_, i_] :=
Count[Table[intersectsQ[l, i, j], {j, 1, Length[l]}], True]-1;
(*And With r as rectangle *)
countIntersectsR[l_, r_] := (
Return[Count[Table[intersectsQ[Append[l, r], Length[l] + 1, j],
{j, 1, Length[l] + 1}], True] - 2];)
(* Get the maximum intersections for all rectangles*)
findMaxIntesections[l_] := Max[Table[countIntersects[l, i],
{i, 1, Length[l]}]];
(* Get the rectangle center *)
rectCenter[l_, i_] := {1/2 (maxX[l, i] + minX[l, i] ),
1/2 (maxY[l, i] + minY[l, i] )};
(* Get the Geom center of the whole figure (list), to move aesthetically*)
geometryCenter[l_] := (* returns {x,y} *)
Mean[Table[rectCenter[l, i], {i, Length[l]}]];
(* Increment or decr. size of all rects by a bit (put/remove borders)*)
changeSize[l_, incr_] :=
Table[{{minX[l, i] - incr, maxX[l, i] + incr},
{minY[l, i] - incr, maxY[l, i] + incr},
color[l, i]},
{i, Length[l]}];
sortListByIntersections[l_] := (* Order list by most intersecting Rects*)
Module[{a, b},
a = MapIndexed[{countIntersectsR[l, #1], #2} &, l];
b = SortBy[a, -#[[1]] &];
Return[Table[l[[b[[i]][[2]][[1]]]], {i, Length[b]}]];
];
(* Utility Functions*)
deb[x_] := (Print["--------"]; Print[x]; Print["---------"];)(* for debug *)
tableForPlot[l_] := (*for plotting*)
Table[{color[l, i], Rectangle[{minX[l, i], minY[l, i]},
{maxX[l, i], maxY[l, i]}]}, {i, Length[l]}];
genList[nonOverlap_, Overlap_] := (* Generate initial lists of rects*)
Module[{alist, blist, a, b},
(alist = (* Generate non overlapping - Tabuloid *)
Table[{{Mod[i, 3], Mod[i, 3] + .8},
{Mod[i, 4], Mod[i, 4] + .8},
rndCol[]}, {i, nonOverlap}];
blist = (* Random overlapping *)
Table[{{a = rnR[3], a + rnR[2]}, {b = rnR[3], b + rnR[2]},
rndCol[]}, {Overlap}];
Return[Join[alist, blist] (* Join both *)];)
];
Main
clist = genList[6, 4]; (* Generate a mix fixed & random set *)
incr = 0.05; (* may be some heuristics needed to determine best increment*)
clist = changeSize[clist,incr]; (* expand rects so that borders does not
touch each other*)
(* Now remove all intercepting rectangles until no more intersections *)
workList = {}; (* the stack*)
While[findMaxIntesections[clist] > 0,
(*Iterate until no intersections *)
clist = sortListByIntersections[clist];
(*Put the most intersected first*)
PrependTo[workList, First[clist]];
(* Push workList with intersected *)
clist = Delete[clist, 1]; (* and Drop it from clist *)
];
(* There are no intersections now, lets pop the stack*)
While [workList != {},
PrependTo[clist, First[workList]];
(*Push first element in front of clist*)
workList = Delete[workList, 1];
(* and Drop it from worklist *)
toMoveIndex = 1;
(*Will move the most intersected Rect*)
g = geometryCenter[clist];
(*so the geom. perception is preserved*)
vectorToMove = rectCenter[clist, toMoveIndex] - g;
If [Norm[vectorToMove] < 0.01, vectorToMove = {1,1}]; (*just in case*)
vectorToMove = vectorToMove/Norm[vectorToMove];
(*to manage step size wisely*)
(*Now iterate finding minimum move first one way, then the other*)
i = 1; (*movement quantity*)
While[countIntersects[clist, toMoveIndex] != 0,
(*If the Rect still intersects*)
(*move it alternating ways (-1)^n *)
clist[[toMoveIndex]][[1]] += (-1)^i i incr vectorToMove[[1]];(*X coords*)
clist[[toMoveIndex]][[2]] += (-1)^i i incr vectorToMove[[2]];(*Y coords*)
i++;
];
];
clist = changeSize[clist, -incr](* restore original sizes*);
HTH!
Edit: Multi-angle searching
I implemented a change in the algorithm, allowing it to search in all directions but giving preference to the axis imposed by the geometric symmetry.
At the expense of more cycles, this resulted in more compact final configurations, as you can see here below:
More samples here.
The pseudocode for the main loop changed to:
Expand each rectangle size by a few points to get gaps in final configuration
While There are intersections
sort list of rectangles by number of intersections
push most intersected rectangle on stack, and remove it from list
// Now all remaining rectangles don't intersect each other
While stack not empty
find the geometric center G of the chart (each time!)
find the PREFERRED movement vector M (from G to rectangle center)
pop rectangle from stack
With the rectangle
While there are intersections (list+rectangle)
For increasing movement modulus
For increasing angle (0, Pi/4)
rotate vector M expanding the angle alongside M
(* angle, -angle, Pi + angle, Pi-angle*)
re-position the rectangle according to M
Re-insert the modified rectangle into the list
Shrink the rectangles to its original size
I'm not including the source code for brevity, but just ask for it if you think you can use it. I think that, should you go this way, it's better to switch to R-trees (a lot of interval tests needed here)
Here's a guess.
Find the center C of the bounding box of your rectangles.
For each rectangle R that overlaps another:
1. Define a movement vector v.
2. Find all the rectangles R' that overlap R.
3. Add a vector to v proportional to the vector between the center of R and R'.
4. Add a vector to v proportional to the vector between C and the center of R.
5. Move R by v.
Repeat until nothing overlaps.
This incrementally moves the rectangles away from each other and the center of all the rectangles. This will terminate because the component of v from step 4 will eventually spread them out enough all by itself.
I think this solution is quite similar to the one given by cape1232, but it's already implemented, so worth checking out :)
Follow this reddit discussion: http://www.reddit.com/r/gamedev/comments/1dlwc4/procedural_dungeon_generation_algorithm_explained/ and check out the description and implementation. There's no source code available, so here's my approach to this problem in AS3 (it works exactly the same, but keeps rectangles snapped to the grid's resolution):
public class RoomSeparator extends AbstractAction {
public function RoomSeparator(name:String = "Room Separator") {
super(name);
}
override public function get finished():Boolean { return _step == 1; }
override public function step():void {
const repelDecayCoefficient:Number = 1.0;
_step = 1;
var count:int = _activeRoomContainer.children.length;
for(var i:int = 0; i < count; i++) {
var room:Room = _activeRoomContainer.children[i];
var center:Vector3D = new Vector3D(room.x + room.width / 2, room.y + room.height / 2);
var velocity:Vector3D = new Vector3D();
for(var j:int = 0; j < count; j++) {
if(i == j)
continue;
var otherRoom:Room = _activeRoomContainer.children[j];
var intersection:Rectangle = GeomUtil.rectangleIntersection(room.createRectangle(), otherRoom.createRectangle());
if(intersection == null || intersection.width == 0 || intersection.height == 0)
continue;
var otherCenter:Vector3D = new Vector3D(otherRoom.x + otherRoom.width / 2, otherRoom.y + otherRoom.height / 2);
var diff:Vector3D = center.subtract(otherCenter);
if(diff.length > 0) {
var scale:Number = repelDecayCoefficient / diff.lengthSquared;
diff.normalize();
diff.scaleBy(scale);
velocity = velocity.add(diff);
}
}
if(velocity.length > 0) {
_step = 0;
velocity.normalize();
room.x += Math.abs(velocity.x) < 0.5 ? 0 : velocity.x > 0 ? _resolution : -_resolution;
room.y += Math.abs(velocity.y) < 0.5 ? 0 : velocity.y > 0 ? _resolution : -_resolution;
}
}
}
}
I really like b005t3r's implementation! It works in my test cases; however, my rep is too low to leave a comment with the 2 suggested fixes.
You should not be translating rooms by single resolution increments; you should translate by the velocity you just painstakingly calculated! This makes the separation more organic, as deeply intersected rooms separate more each iteration than not-so-deeply intersecting rooms.
You should not assume velocities less than 0.5 mean rooms are separated, as you can get stuck in a case where they never separate. Imagine 2 rooms intersect but are unable to correct themselves, because whenever either one attempts to correct the penetration it calculates the required velocity as < 0.5, so they iterate endlessly.
Here is a Java solution (: Cheers!
do {
_separated = true;
for (Room room : getRooms()) {
// reset for iteration
Vector2 velocity = new Vector2();
Vector2 center = room.createCenter();
for (Room other_room : getRooms()) {
if (room == other_room)
continue;
if (!room.createRectangle().overlaps(other_room.createRectangle()))
continue;
Vector2 other_center = other_room.createCenter();
Vector2 diff = new Vector2(center.x - other_center.x, center.y - other_center.y);
float diff_len2 = diff.len2();
if (diff_len2 > 0f) {
final float repelDecayCoefficient = 1.0f;
float scale = repelDecayCoefficient / diff_len2;
diff.nor();
diff.scl(scale);
velocity.add(diff);
}
}
if (velocity.len2() > 0f) {
_separated = false;
velocity.nor().scl(delta * 20f);
room.getPosition().add(velocity);
}
}
} while (!_separated);
Here's an algorithm written using Java for handling a cluster of unrotated Rectangles. It allows you to specify the desired aspect ratio of the layout and positions the cluster using a parameterised Rectangle as an anchor point, which all translations made are oriented about. You can also specify an arbitrary amount of padding which you'd like to spread the Rectangles by.
public final class BoxxyDistribution {
/* Static Definitions. */
private static final int INDEX_BOUNDS_MINIMUM_X = 0;
private static final int INDEX_BOUNDS_MINIMUM_Y = 1;
private static final int INDEX_BOUNDS_MAXIMUM_X = 2;
private static final int INDEX_BOUNDS_MAXIMUM_Y = 3;
private static final double onCalculateMagnitude(final double pDeltaX, final double pDeltaY) {
return Math.sqrt((pDeltaX * pDeltaX) + (pDeltaY * pDeltaY));
}
/* Updates the members of EnclosingBounds to ensure the dimensions of T can be completely encapsulated. */
private static final void onEncapsulateBounds(final double[] pEnclosingBounds, final double pMinimumX, final double pMinimumY, final double pMaximumX, final double pMaximumY) {
pEnclosingBounds[0] = Math.min(pEnclosingBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X], pMinimumX);
pEnclosingBounds[1] = Math.min(pEnclosingBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y], pMinimumY);
pEnclosingBounds[2] = Math.max(pEnclosingBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X], pMaximumX);
pEnclosingBounds[3] = Math.max(pEnclosingBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y], pMaximumY);
}
private static final void onEncapsulateBounds(final double[] pEnclosingBounds, final double[] pBounds) {
BoxxyDistribution.onEncapsulateBounds(pEnclosingBounds, pBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X], pBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y], pBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X], pBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y]);
}
private static final double onCalculateMidpoint(final double pMaximum, final double pMinimum) {
return ((pMaximum - pMinimum) * 0.5) + pMinimum;
}
/* Re-arranges a List of Rectangles into something aesthetically pleasing. */
public static final void onBoxxyDistribution(final List<Rectangle> pRectangles, final Rectangle pAnchor, final double pPadding, final double pAspectRatio, final float pRowFillPercentage) {
/* Create a safe clone of the Rectangles that we can modify as we please. */
final List<Rectangle> lRectangles = new ArrayList<Rectangle>(pRectangles);
/* Allocate a List to track the bounds of each Row. */
final List<double[]> lRowBounds = new ArrayList<double[]>(); // (MinX, MinY, MaxX, MaxY)
/* Ensure Rectangles does not contain the Anchor. */
lRectangles.remove(pAnchor);
/* Order the Rectangles via their proximity to the Anchor. */
Collections.sort(lRectangles, new Comparator<Rectangle>(){ @Override public final int compare(final Rectangle pT0, final Rectangle pT1) {
/* Calculate the Distance for pT0. */
final double lDistance0 = BoxxyDistribution.onCalculateMagnitude(pAnchor.getCenterX() - pT0.getCenterX(), pAnchor.getCenterY() - pT0.getCenterY());
final double lDistance1 = BoxxyDistribution.onCalculateMagnitude(pAnchor.getCenterX() - pT1.getCenterX(), pAnchor.getCenterY() - pT1.getCenterY());
/* Compare the magnitude in distance between the anchor and the Rectangles. */
return Double.compare(lDistance0, lDistance1);
} });
/* Initialize the RowBounds using the Anchor. */ /** TODO: Probably better to call getBounds() here. **/
lRowBounds.add(new double[]{ pAnchor.getX(), pAnchor.getY(), pAnchor.getX() + pAnchor.getWidth(), pAnchor.getY() + pAnchor.getHeight() });
/* Allocate a variable for tracking the TotalBounds of all rows. */
final double[] lTotalBounds = new double[]{ Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY, Double.NEGATIVE_INFINITY };
/* Now we iterate the Rectangles to place them optimally about the Anchor. */
for(int i = 0; i < lRectangles.size(); i++) {
/* Fetch the Rectangle. */
final Rectangle lRectangle = lRectangles.get(i);
/* Iterate through each Row. */
for(final double[] lBounds : lRowBounds) {
/* Update the TotalBounds. */
BoxxyDistribution.onEncapsulateBounds(lTotalBounds, lBounds);
}
/* Allocate a variable to state whether the Rectangle has been allocated a suitable RowBounds. */
boolean lIsBounded = false;
/* Calculate the AspectRatio. */
final double lAspectRatio = (lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X] - lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X]) / (lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y] - lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y]);
/* We will now iterate through each of the available Rows to determine if a Rectangle can be stored. */
for(int j = 0; j < lRowBounds.size() && !lIsBounded; j++) {
/* Fetch the Bounds. */
final double[] lBounds = lRowBounds.get(j);
/* Calculate the width and height of the Bounds. */
final double lWidth = lBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X] - lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X];
final double lHeight = lBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y] - lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y];
/* Determine whether the Rectangle is suitable to fit in the RowBounds. */
if(lRectangle.getHeight() <= lHeight && !(lAspectRatio > pAspectRatio && lWidth > pRowFillPercentage * (lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X] - lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X]))) {
/* Register that the Rectangle IsBounded. */
lIsBounded = true;
/* Update the Rectangle's X and Y Co-ordinates. */
lRectangle.setFrame((lRectangle.getX() > BoxxyDistribution.onCalculateMidpoint(lBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X], lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X])) ? lBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_X] + pPadding : lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X] - (pPadding + lRectangle.getWidth()), lBounds[1], lRectangle.getWidth(), lRectangle.getHeight());
/* Update the Bounds. (Do not modify the vertical metrics.) */
BoxxyDistribution.onEncapsulateBounds(lTotalBounds, lRectangle.getX(), lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y], lRectangle.getX() + lRectangle.getWidth(), lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y] + lHeight);
}
}
/* Determine if the Rectangle has not been allocated a Row. */
if(!lIsBounded) {
/* Calculate the MidPoint of the TotalBounds. */
final double lCentreY = BoxxyDistribution.onCalculateMidpoint(lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y], lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y]);
/* Determine whether to place the bounds above or below? */
final double lYPosition = lRectangle.getY() < lCentreY ? lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y] - (pPadding + lRectangle.getHeight()) : (lTotalBounds[BoxxyDistribution.INDEX_BOUNDS_MAXIMUM_Y] + pPadding);
/* Create a new RowBounds. */
final double[] lBounds = new double[]{ pAnchor.getX(), lYPosition, pAnchor.getX() + lRectangle.getWidth(), lYPosition + lRectangle.getHeight() };
/* Allocate a new row, roughly positioned about the anchor. */
lRowBounds.add(lBounds);
/* Position the Rectangle. */
lRectangle.setFrame(lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_X], lBounds[BoxxyDistribution.INDEX_BOUNDS_MINIMUM_Y], lRectangle.getWidth(), lRectangle.getHeight());
}
}
}
}
Here's an example using an AspectRatio of 1.2, a FillPercentage of 0.8 and a Padding of 10.0.
This is a deterministic approach which allows spacing to occur around the anchor whilst leaving the location of the anchor itself unchanged. This allows the layout to occur around wherever the user's Point of Interest is. The logic for selecting a position is pretty simplistic, but I think the surrounding architecture of sorting the elements based upon their initial position and then iterating them is a useful approach for implementing a relatively predictable distribution. Plus we're not relying on iterative intersection tests or anything like that, just building up some bounding boxes to give us a broad indication of where to align things; after this, applying padding just comes kind of naturally.
Here is a version that takes cape1232's answer and is a standalone runnable example for Java:
public class Rectangles extends JPanel {
List<Rectangle2D> rectangles = new ArrayList<Rectangle2D>();
{
// x,y,w,h
rectangles.add(new Rectangle2D.Float(300, 50, 50, 50));
rectangles.add(new Rectangle2D.Float(300, 50, 20, 50));
rectangles.add(new Rectangle2D.Float(100, 100, 100, 50));
rectangles.add(new Rectangle2D.Float(120, 200, 50, 50));
rectangles.add(new Rectangle2D.Float(150, 130, 100, 100));
rectangles.add(new Rectangle2D.Float(0, 100, 100, 50));
for (int i = 0; i < 10; i++) {
for (int j = 0; j < 10; j++) {
rectangles.add(new Rectangle2D.Float(i * 40, j * 40, 20, 20));
}
}
}
List<Rectangle2D> rectanglesToDraw;
protected void reset() {
rectanglesToDraw = rectangles;
this.repaint();
}
private List<Rectangle2D> findIntersections(Rectangle2D rect, List<Rectangle2D> rectList) {
ArrayList<Rectangle2D> intersections = new ArrayList<Rectangle2D>();
for (Rectangle2D intersectingRect : rectList) {
if (!rect.equals(intersectingRect) && intersectingRect.intersects(rect)) {
intersections.add(intersectingRect);
}
}
return intersections;
}
protected void fix() {
rectanglesToDraw = new ArrayList<Rectangle2D>();
for (Rectangle2D rect : rectangles) {
Rectangle2D copyRect = new Rectangle2D.Double();
copyRect.setRect(rect);
rectanglesToDraw.add(copyRect);
}
// Find the center C of the bounding box of your rectangles.
Rectangle2D surroundRect = surroundingRect(rectanglesToDraw);
Point center = new Point((int) surroundRect.getCenterX(), (int) surroundRect.getCenterY());
int movementFactor = 5;
boolean hasIntersections = true;
while (hasIntersections) {
hasIntersections = false;
for (Rectangle2D rect : rectanglesToDraw) {
// Find all the rectangles R' that overlap R.
List<Rectangle2D> intersectingRects = findIntersections(rect, rectanglesToDraw);
if (intersectingRects.size() > 0) {
// Define a movement vector v.
Point movementVector = new Point(0, 0);
Point centerR = new Point((int) rect.getCenterX(), (int) rect.getCenterY());
// For each rectangle R that overlaps another.
for (Rectangle2D rPrime : intersectingRects) {
Point centerRPrime = new Point((int) rPrime.getCenterX(), (int) rPrime.getCenterY());
int xTrans = (int) (centerR.getX() - centerRPrime.getX());
int yTrans = (int) (centerR.getY() - centerRPrime.getY());
// Add a vector to v proportional to the vector between the center of R and R'.
movementVector.translate(xTrans < 0 ? -movementFactor : movementFactor,
yTrans < 0 ? -movementFactor : movementFactor);
}
int xTrans = (int) (centerR.getX() - center.getX());
int yTrans = (int) (centerR.getY() - center.getY());
// Add a vector to v proportional to the vector between C and the center of R.
movementVector.translate(xTrans < 0 ? -movementFactor : movementFactor,
yTrans < 0 ? -movementFactor : movementFactor);
// Move R by v.
rect.setRect(rect.getX() + movementVector.getX(), rect.getY() + movementVector.getY(),
rect.getWidth(), rect.getHeight());
// Repeat until nothing overlaps.
hasIntersections = true;
}
}
}
this.repaint();
}
private Rectangle2D surroundingRect(List<Rectangle2D> rectangles) {
Point topLeft = null;
Point bottomRight = null;
for (Rectangle2D rect : rectangles) {
if (topLeft == null) {
topLeft = new Point((int) rect.getMinX(), (int) rect.getMinY());
} else {
if (rect.getMinX() < topLeft.getX()) {
topLeft.setLocation((int) rect.getMinX(), topLeft.getY());
}
if (rect.getMinY() < topLeft.getY()) {
topLeft.setLocation(topLeft.getX(), (int) rect.getMinY());
}
}
if (bottomRight == null) {
bottomRight = new Point((int) rect.getMaxX(), (int) rect.getMaxY());
} else {
if (rect.getMaxX() > bottomRight.getX()) {
bottomRight.setLocation((int) rect.getMaxX(), bottomRight.getY());
}
if (rect.getMaxY() > bottomRight.getY()) {
bottomRight.setLocation(bottomRight.getX(), (int) rect.getMaxY());
}
}
}
return new Rectangle2D.Double(topLeft.getX(), topLeft.getY(), bottomRight.getX() - topLeft.getX(),
bottomRight.getY() - topLeft.getY());
}
public void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2d = (Graphics2D) g;
for (Rectangle2D entry : rectanglesToDraw) {
g2d.setStroke(new BasicStroke(1));
// g2d.fillRect((int) entry.getX(), (int) entry.getY(), (int) entry.getWidth(),
// (int) entry.getHeight());
g2d.draw(entry);
}
}
protected static void createAndShowGUI() {
Rectangles rects = new Rectangles();
rects.reset();
JFrame frame = new JFrame("Rectangles");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setLayout(new BorderLayout());
frame.add(rects, BorderLayout.CENTER);
JPanel buttonsPanel = new JPanel();
JButton fix = new JButton("Fix");
fix.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
rects.fix();
}
});
JButton resetButton = new JButton("Reset");
resetButton.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
rects.reset();
}
});
buttonsPanel.add(fix);
buttonsPanel.add(resetButton);
frame.add(buttonsPanel, BorderLayout.SOUTH);
frame.setSize(400, 400);
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
public static void main(String[] args) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
createAndShowGUI();
}
});
}
}

Compute the area of intersection between a circle and a triangle?

How does one compute the area of intersection between a triangle (specified as three (X,Y) pairs) and a circle (X,Y,R)? I've done some searching to no avail. This is for work, not school. :)
It would look something like this in C#:
struct { PointF vert[3]; } Triangle;
struct { PointF center; float radius; } Circle;
// returns the area of intersection, e.g.:
// if the circle contains the triangle, return area of triangle
// if the triangle contains the circle, return area of circle
// if partial intersection, figure that out
// if no intersection, return 0
double AreaOfIntersection(Triangle t, Circle c)
{
...
}
First I will remind us how to find the area of a polygon. Once we have done this, the algorithm to find the intersection between a polygon and a circle should be easy to understand.
How to find the area of a polygon
Let's look at the case of a triangle, because all the essential logic appears there. Let's assume we have a triangle with vertices (x1,y1), (x2,y2), and (x3,y3) as you go around the triangle counter-clockwise, as shown in the following figure:
Then you can compute the area by the formula
A = (x1 y2 + x2 y3 + x3 y1 - x2 y1 - x3 y2 - x1 y3) / 2.
To see why this formula works, let's rearrange it so it is in the form
A = (x1 y2 - x2 y1)/2 + (x2 y3 - x3 y2)/2 + (x3 y1 - x1 y3)/2.
Now the first term is the following area, which is positive in our case:
If it isn't clear that the area of the green region is indeed (x1 y2 - x2 y1)/2, then read this.
The second term is this area, which is again positive:
And the third area is shown in the following figure. This time the area is negative
Adding these three up we get the following picture
We see that the green area that was outside the triangle is cancelled by the red area, so that the net area is just the area of the triangle, and this shows why our formula was true in this case.
What I said above was the intuitive explanation as to why the area formula is correct. A more rigorous explanation would be to observe that when we calculate the area from an edge, the area we get is the same area we would get from integrating r^2 dθ/2, so we are effectively integrating r^2 dθ/2 around the boundary of the polygon, and by Stokes' theorem this gives the same result as integrating r dr dθ over the region bounded by the polygon. Since integrating r dr dθ over the region bounded by the polygon gives the area, we conclude that our procedure must correctly give the area.
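As a small concrete illustration of the formula above (a sketch of my own, for an arbitrary polygon rather than just a triangle):
// Signed area from the boundary terms (x_i*y_(i+1) - x_(i+1)*y_i)/2:
// positive for counter-clockwise vertex order, negative for clockwise.
static double polygonArea(double[] x, double[] y) {
    double twiceArea = 0;
    int n = x.length;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n; // next vertex, wrapping around to the first
        twiceArea += x[i] * y[j] - x[j] * y[i];
    }
    return twiceArea / 2;
}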
Area of the intersection of a circle with a polygon
Now let's discuss how to find the area of the intersection of a circle of radius R with a polygon, as shown in the following figure:
We are interested in finding the area of the green region. We may, just as in the case of the single polygon, break our calculation into finding an area for each side of the polygon, and then add those areas up.
Our first area will look like:
The second area will look like
And the third area will be
Again, the first two areas are positive in our case while the third one will be negative. Hopefully the cancellations will work out so that the net area is indeed the area we are interested in. Let's see.
Indeed, the sum of the areas will be the area we are interested in.
Again, we can give a more rigorous explanation of why this works. Let I be the region defined by the intersection and let P be the polygon. Then from the previous discussion, we know that we want to compute the integral of r^2 dθ/2 around the boundary of I. However, this is difficult to do because it requires finding the intersection.
Instead we did an integral over the polygon. We integrated min(r,R)^2 dθ/2 over the boundary of the polygon. To see why this gives the right answer, let's define a function π which takes a point in polar coordinates (r,θ) to the point (min(r,R),θ). It shouldn't be confusing to refer to the coordinate functions of π as π(r)=min(r,R) and π(θ)=θ. Then what we did was to integrate π(r)^2 dθ/2 over the boundary of the polygon.
On the other hand since π(θ)=θ, this is the same as integrating π(r)^2 dπ(θ)/2 over the boundary of the polygon.
Now doing a change of variable, we find that we would get the same answer if we integrated r^2 dθ/2 over the boundary of π(P), where π(P) is the image of P under π.
Using Stokes' theorem again, we know that integrating r^2 dθ/2 over the boundary of π(P) gives us the area of π(P). In other words, it gives the same answer as integrating dx dy over π(P).
Using a change of variable again, we know that integrating dx dy over π(P) is the same as integrating J dx dy over P, where J is the Jacobian of π.
Now we can split the integral of Jdxdy into two regions: the part in the circle and the part outside the circle. Now π leaves points in the circle alone so J=1 there, so the contribution from this part of P is the area of the part of P that lies in the circle i.e., the area of the intersection. The second region is the region outside the circle. There J=0 since π collapses this part down to the boundary of the circle.
Thus what we compute is indeed the area of the intersection.
Now that we are relatively sure we know conceptually how to find the area, let's talk more specifically about how to compute the contribution from a single segment. Let's start by looking at a segment in what I will call "standard geometry". It is shown below.
In standard geometry, the edge goes horizontally from left to right. It is described by three numbers: xi, the x-coordinate where the edge starts, xf, the x-coordinate where the edge ends, and y, the y coordinate of the edge.
Now we see that if |y| < R, as in the figure, then the edge will intersect the circle at the points (-xint,y) and (xint,y) where xint = (R^2-y^2)^(1/2). Then the area we need to calculate is broken up into three pieces labelled in the figure. To get the areas of regions 1 and 3, we can use arctan to get the angles of the various points and then equate the area to R^2 Δθ/2. So for example we would set θi = atan2(y,xi) and θl = atan2(y,-xint). Then the area of region one is R^2 (θl-θi)/2. We can obtain the area of region 3 similarly.
The area of region 2 is just the area of a triangle. However, we must be careful about sign. We want the area shown to be positive so we will say the area is -(xint - (-xint))y/2.
Another thing to keep in mind is that in general, xi does not have to be less than -xint and xf does not have to be greater than xint.
The other case to consider is |y| > R. This case is simpler, because there is only one piece which is similar to region 1 in the figure.
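To make the per-edge bookkeeping concrete, here is a compact standalone sketch of the contribution of one edge in standard geometry (the helper names are my own; the class in the Code section below implements the same logic, just spread over several methods):
// Area contribution of a single edge in standard geometry.
// xi, xf: start and end x-coordinates of the edge; y: its y-coordinate;
// R: the circle radius.  Uses the sign conventions described above.
static double edgeAreaStandardGeometry(double xi, double xf, double y, double R) {
    if (y * y >= R * R) {
        // the edge's line does not cross the circle: a single arc-style piece
        return R * R * arcAngle(y, xi, xf) / 2;
    }
    double xint = Math.sqrt(R * R - y * y);
    double area = 0;
    if (xi < -xint) { // region 1
        area += R * R * arcAngle(y, xi, Math.min(xf, -xint)) / 2;
    }
    if (xint < xf) { // region 3
        area += R * R * arcAngle(y, Math.max(xi, xint), xf) / 2;
    }
    double left = Math.max(xi, -xint);
    double right = Math.min(xf, xint);
    if (left < right) { // region 2: the triangle piece, negated so negative y gives positive area
        area -= (right - left) * y / 2;
    }
    return area;
}

// Angle swept from (xStart, y) to (xEnd, y) as seen from the circle center,
// wrapped into (-pi, pi].
static double arcAngle(double y, double xStart, double xEnd) {
    double delta = Math.atan2(y, xEnd) - Math.atan2(y, xStart);
    if (delta < -Math.PI) delta += 2 * Math.PI;
    else if (delta > Math.PI) delta -= 2 * Math.PI;
    return delta;
}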
Now that we know how to compute the area from an edge in standard geometry, the only thing left to do is describe how to transform any edge into standard geometry.
But this is just a simple change of coordinates. Given some edge with initial vertex vi and final vertex vf, the new x unit vector will be the unit vector pointing from vi to vf. Then xi is just the displacement of vi from the center of the circle dotted into x, and xf is just xi plus the distance between vi and vf. Meanwhile y is given by the wedge product of x with the displacement of vi from the center of the circle.
Code
That completes the description of the algorithm, now it is time to write some code. I will use Java.
First off, since we are working with circles, we should have a Circle class:
import java.awt.geom.Point2D;

public class Circle {
final Point2D center;
final double radius;
public Circle(double x, double y, double radius) {
center = new Point2D.Double(x, y);
this.radius = radius;
}
public Circle(Point2D.Double center, double radius) {
this(center.getX(), center.getY(), radius);
}
public Point2D getCenter() {
return new Point2D.Double(getCenterX(), getCenterY());
}
public double getCenterX() {
return center.getX();
}
public double getCenterY() {
return center.getY();
}
public double getRadius() {
return radius;
}
}
For polygons, I will use Java's Shape class. Shapes have a PathIterator that I can use to iterate through the edges of the polygon.
Now for the actual work. I will separate the logic of iterating through the edges, putting the edges in standard geometry, etc., from the logic of computing the area once this is done. The reason for this is that you may in the future want to compute something else besides or in addition to the area, and you want to be able to reuse the code that deals with iterating through the edges.
So I have a generic class which computes some property of type T about our polygon-circle intersection.
public abstract class CircleShapeIntersectionFinder<T> {
It has three static methods that just help compute geometry:
private static double[] displacment2D(final double[] initialPoint, final double[] finalPoint) {
return new double[]{finalPoint[0] - initialPoint[0], finalPoint[1] - initialPoint[1]};
}
private static double wedgeProduct2D(final double[] firstFactor, final double[] secondFactor) {
return firstFactor[0] * secondFactor[1] - firstFactor[1] * secondFactor[0];
}
static private double dotProduct2D(final double[] firstFactor, final double[] secondFactor) {
return firstFactor[0] * secondFactor[0] + firstFactor[1] * secondFactor[1];
}
There are two instance fields, a Circle which just keeps a copy of the circle, and the currentSquareRadius, which keeps a copy of the square radius. This may seem odd, but the class I am using is actually equipped to find the areas of a whole collection of circle-polygon intersections. That is why I am referring to one of the circles as "current".
private Circle currentCircle;
private double currentSquareRadius;
Next comes the method for computing what we want to compute:
public final T computeValue(Circle circle, Shape shape) {
initialize();
processCircleShape(circle, shape);
return getValue();
}
initialize() and getValue() are abstract. initialize() would set the variable that is keeping a total of the area to zero, and getValue() would just return the area. The definition for processCircleShape is
private void processCircleShape(Circle circle, final Shape cellBoundaryPolygon) {
initializeForNewCirclePrivate(circle);
if (cellBoundaryPolygon == null) {
return;
}
PathIterator boundaryPathIterator = cellBoundaryPolygon.getPathIterator(null);
double[] firstVertex = new double[2];
double[] oldVertex = new double[2];
double[] newVertex = new double[2];
int segmentType = boundaryPathIterator.currentSegment(firstVertex);
if (segmentType != PathIterator.SEG_MOVETO) {
throw new AssertionError();
}
System.arraycopy(firstVertex, 0, newVertex, 0, 2);
boundaryPathIterator.next();
System.arraycopy(newVertex, 0, oldVertex, 0, 2);
segmentType = boundaryPathIterator.currentSegment(newVertex);
while (segmentType != PathIterator.SEG_CLOSE) {
processSegment(oldVertex, newVertex);
boundaryPathIterator.next();
System.arraycopy(newVertex, 0, oldVertex, 0, 2);
segmentType = boundaryPathIterator.currentSegment(newVertex);
}
processSegment(newVertex, firstVertex);
}
Let's take a quick look at initializeForNewCirclePrivate. This method just sets the instance fields and allows the derived class to store any property of the circle. Its definition is
private void initializeForNewCirclePrivate(Circle circle) {
currentCircle = circle;
currentSquareRadius = currentCircle.getRadius() * currentCircle.getRadius();
initializeForNewCircle(circle);
}
initializeForNewCircle is abstract and one implementation would be for it to store the circle's radius to avoid having to do square roots. Anyway, back to processCircleShape. After calling initializeForNewCirclePrivate, we check if the polygon is null (which I am interpreting as an empty polygon), and we return if it is null. In this case, our computed area would be zero. If the polygon is not null then we get the PathIterator of the polygon. The argument to the getPathIterator method I call is an affine transformation that can be applied to the path. I don't want to apply one though, so I just pass null.
Next I declare the double[]s that will keep track of the vertices. I must remember the first vertex because the PathIterator only gives me each vertex once, so I have to go back after it has given me the last vertex, and form an edge with this last vertex and the first vertex.
The currentSegment method on the next line puts the next vertex in its argument. It returns a code that tells you when it is out of vertices. This is why the control expression for my while loop is what it is.
Most of the rest of the code of this method is uninteresting logic related to iterating through the vertices. The important thing is that once per iteration of the while loop I call processSegment and then I call processSegment again at the end of the method to process the edge that connects the last vertex to the first vertex.
Let's look at the code for processSegment:
private void processSegment(double[] initialVertex, double[] finalVertex) {
double[] segmentDisplacement = displacment2D(initialVertex, finalVertex);
if (segmentDisplacement[0] == 0 && segmentDisplacement[1] == 0) {
return;
}
double segmentLength = Math.sqrt(dotProduct2D(segmentDisplacement, segmentDisplacement));
double[] centerToInitialDisplacement = new double[]{initialVertex[0] - getCurrentCircle().getCenterX(), initialVertex[1] - getCurrentCircle().getCenterY()};
final double leftX = dotProduct2D(centerToInitialDisplacement, segmentDisplacement) / segmentLength;
final double rightX = leftX + segmentLength;
final double y = wedgeProduct2D(segmentDisplacement, centerToInitialDisplacement) / segmentLength;
processSegmentStandardGeometry(leftX, rightX, y);
}
In this method I implement the steps to transform an edge into the standard geometry as described above. First I calculate segmentDisplacement, the displacement from the initial vertex to the final vertex. This defines the x axis of the standard geometry. I do an early return if this displacement is zero.
Next I calculate the length of the displacement, because this is necessary to get the x unit vector. Once I have this information, I calculate the displacement from the center of the circle to the initial vertex. The dot product of this with segmentDisplacement gives me leftX which I had been calling xi. Then rightX, which I had been calling xf, is just leftX + segmentLength. Finally I do the wedge product to get y as described above.
Now that I have transformed the problem into the standard geometry, it will be easy to deal with. That is what the processSegmentStandardGeometry method does. Let's look at the code
private void processSegmentStandardGeometry(double leftX, double rightX, double y) {
if (y * y > getCurrentSquareRadius()) {
processNonIntersectingRegion(leftX, rightX, y);
} else {
final double intersectionX = Math.sqrt(getCurrentSquareRadius() - y * y);
if (leftX < -intersectionX) {
final double leftRegionRightEndpoint = Math.min(-intersectionX, rightX);
processNonIntersectingRegion(leftX, leftRegionRightEndpoint, y);
}
if (intersectionX < rightX) {
final double rightRegionLeftEndpoint = Math.max(intersectionX, leftX);
processNonIntersectingRegion(rightRegionLeftEndpoint, rightX, y);
}
final double middleRegionLeftEndpoint = Math.max(-intersectionX, leftX);
final double middleRegionRightEndpoint = Math.min(intersectionX, rightX);
final double middleRegionLength = Math.max(middleRegionRightEndpoint - middleRegionLeftEndpoint, 0);
processIntersectingRegion(middleRegionLength, y);
}
}
The first if distinguishes the cases where y is small enough that the edge may intersect the circle. If y is big and there is no possibility of intersection, then I call the method to handle that case. Otherwise I handle the case where intersection is possible.
If intersection is possible, I calculate the x coordinate of intersection, intersectionX, and I divide the edge up into three portions, which correspond to regions 1, 2, and 3 of the standard geometry figure above. First I handle region 1.
To handle region 1, I check if leftX is indeed less than -intersectionX for otherwise there would be no region 1. If there is a region 1, then I need to know when it ends. It ends at the minimum of rightX and -intersectionX. After I have found these x-coordinates, I deal with this non-intersection region.
I do a similar thing to handle region 3.
For region 2, I have to do some logic to check that leftX and rightX do actually bracket some region in between -intersectionX and intersectionX. After finding the region, I only need the length of the region and y, so I pass these two numbers on to an abstract method which handles region 2.
Now let's look at the code for processNonIntersectingRegion
private void processNonIntersectingRegion(double leftX, double rightX, double y) {
final double initialTheta = Math.atan2(y, leftX);
final double finalTheta = Math.atan2(y, rightX);
double deltaTheta = finalTheta - initialTheta;
if (deltaTheta < -Math.PI) {
deltaTheta += 2 * Math.PI;
} else if (deltaTheta > Math.PI) {
deltaTheta -= 2 * Math.PI;
}
processNonIntersectingRegion(deltaTheta);
}
I simply use atan2 to calculate the difference in angle between leftX and rightX. Then I add code to deal with the discontinuity in atan2, but this is probably unnecessary, because the discontinuity occurs either at 180 degrees or 0 degrees. Then I pass the difference in angle onto an abstract method. Lastly we just have abstract methods and getters:
protected abstract void initialize();
protected abstract void initializeForNewCircle(Circle circle);
protected abstract void processNonIntersectingRegion(double deltaTheta);
protected abstract void processIntersectingRegion(double length, double y);
protected abstract T getValue();
protected final Circle getCurrentCircle() {
return currentCircle;
}
protected final double getCurrentSquareRadius() {
return currentSquareRadius;
}
}
Now let's look at the extending class, CircleAreaFinder
public class CircleAreaFinder extends CircleShapeIntersectionFinder<Double> {
public static double findAreaOfCircle(Circle circle, Shape shape) {
CircleAreaFinder circleAreaFinder = new CircleAreaFinder();
return circleAreaFinder.computeValue(circle, shape);
}
double area;
@Override
protected void initialize() {
area = 0;
}
@Override
protected void processNonIntersectingRegion(double deltaTheta) {
area += getCurrentSquareRadius() * deltaTheta / 2;
}
@Override
protected void processIntersectingRegion(double length, double y) {
area -= length * y / 2;
}
@Override
protected Double getValue() {
return area;
}
@Override
protected void initializeForNewCircle(Circle circle) {
}
}
It has a field area to keep track of the area. initialize sets area to zero, as expected. When we process a non-intersecting edge, we increment the area by R^2 Δθ/2, as we concluded we should above. For an intersecting edge, we decrement the area by y*length/2. This was so that negative values for y correspond to positive areas, as we decided they should.
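To see the class in action, here is a minimal usage sketch (my own, assuming the Circle and CircleAreaFinder classes above are compiled as shown): a counter-clockwise triangle that completely contains the unit circle, so the intersection area should come out very close to π.
import java.awt.geom.Path2D;

public class CircleAreaFinderDemo {
    public static void main(String[] args) {
        Circle circle = new Circle(0, 0, 1); // unit circle at the origin

        // a counter-clockwise triangle that easily contains the circle
        Path2D.Double triangle = new Path2D.Double();
        triangle.moveTo(-10, -10);
        triangle.lineTo(10, -10);
        triangle.lineTo(0, 10);
        triangle.closePath();

        double area = CircleAreaFinder.findAreaOfCircle(circle, triangle);
        System.out.println(area);    // expect roughly 3.14159...
        System.out.println(Math.PI);
    }
}
Shrinking the triangle so that its edges cut into the circle should move the result smoothly between 0 and π.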
Now the neat thing is if we want to keep track of the perimeter we don't have to do that much more work. I defined an AreaPerimeter class:
public class AreaPerimeter {
final double area;
final double perimeter;
public AreaPerimeter(double area, double perimeter) {
this.area = area;
this.perimeter = perimeter;
}
public double getArea() {
return area;
}
public double getPerimeter() {
return perimeter;
}
}
and now we just need to extend our abstract class again using AreaPerimeter as the type.
public class CircleAreaPerimeterFinder extends CircleShapeIntersectionFinder<AreaPerimeter> {
public static AreaPerimeter findAreaPerimeterOfCircle(Circle circle, Shape shape) {
CircleAreaPerimeterFinder circleAreaPerimeterFinder = new CircleAreaPerimeterFinder();
return circleAreaPerimeterFinder.computeValue(circle, shape);
}
double perimeter;
double radius;
CircleAreaFinder circleAreaFinder;
@Override
protected void initialize() {
perimeter = 0;
circleAreaFinder = new CircleAreaFinder();
}
@Override
protected void initializeForNewCircle(Circle circle) {
radius = Math.sqrt(getCurrentSquareRadius());
// give the delegated CircleAreaFinder the circle as well; without this its
// getCurrentSquareRadius() stays 0 and the arc contributions to the area are lost
// (passing a null shape just initializes it without processing any edges)
circleAreaFinder.computeValue(circle, null);
}
@Override
protected void processNonIntersectingRegion(double deltaTheta) {
perimeter += deltaTheta * radius;
circleAreaFinder.processNonIntersectingRegion(deltaTheta);
}
@Override
protected void processIntersectingRegion(double length, double y) {
perimeter += Math.abs(length);
circleAreaFinder.processIntersectingRegion(length, y);
}
@Override
protected AreaPerimeter getValue() {
return new AreaPerimeter(circleAreaFinder.getValue(), perimeter);
}
}
We have a variable perimeter to keep track of the perimeter, we remember the value of the radius to avoid having to call Math.sqrt a lot, and we delegate the calculation of the area to our CircleAreaFinder. We can see that the formulas for the perimeter are easy.
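Here is the same sanity check extended to the perimeter (again my own sketch; note that it relies on the delegate being handed the circle in initializeForNewCircle, as in the code above). For a circle lying entirely inside the polygon, the boundary of the intersection is the full circle, so the perimeter should come out close to 2πr.
import java.awt.geom.Path2D;

public class CircleAreaPerimeterFinderDemo {
    public static void main(String[] args) {
        Circle circle = new Circle(0, 0, 1);

        Path2D.Double triangle = new Path2D.Double();
        triangle.moveTo(-10, -10);
        triangle.lineTo(10, -10);
        triangle.lineTo(0, 10);
        triangle.closePath();

        AreaPerimeter result = CircleAreaPerimeterFinder.findAreaPerimeterOfCircle(circle, triangle);
        System.out.println(result.getArea());      // roughly PI
        System.out.println(result.getPerimeter()); // roughly 2 * PI
    }
}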
For reference here is the full code of CircleShapeIntersectionFinder
import java.awt.Shape;
import java.awt.geom.PathIterator;

public abstract class CircleShapeIntersectionFinder<T> {
private static double[] displacment2D(final double[] initialPoint, final double[] finalPoint) {
return new double[]{finalPoint[0] - initialPoint[0], finalPoint[1] - initialPoint[1]};
}
private static double wedgeProduct2D(final double[] firstFactor, final double[] secondFactor) {
return firstFactor[0] * secondFactor[1] - firstFactor[1] * secondFactor[0];
}
static private double dotProduct2D(final double[] firstFactor, final double[] secondFactor) {
return firstFactor[0] * secondFactor[0] + firstFactor[1] * secondFactor[1];
}
private Circle currentCircle;
private double currentSquareRadius;
public final T computeValue(Circle circle, Shape shape) {
initialize();
processCircleShape(circle, shape);
return getValue();
}
private void processCircleShape(Circle circle, final Shape cellBoundaryPolygon) {
initializeForNewCirclePrivate(circle);
if (cellBoundaryPolygon == null) {
return;
}
PathIterator boundaryPathIterator = cellBoundaryPolygon.getPathIterator(null);
double[] firstVertex = new double[2];
double[] oldVertex = new double[2];
double[] newVertex = new double[2];
int segmentType = boundaryPathIterator.currentSegment(firstVertex);
if (segmentType != PathIterator.SEG_MOVETO) {
throw new AssertionError();
}
System.arraycopy(firstVertex, 0, newVertex, 0, 2);
boundaryPathIterator.next();
System.arraycopy(newVertex, 0, oldVertex, 0, 2);
segmentType = boundaryPathIterator.currentSegment(newVertex);
while (segmentType != PathIterator.SEG_CLOSE) {
processSegment(oldVertex, newVertex);
boundaryPathIterator.next();
System.arraycopy(newVertex, 0, oldVertex, 0, 2);
segmentType = boundaryPathIterator.currentSegment(newVertex);
}
processSegment(newVertex, firstVertex);
}
private void initializeForNewCirclePrivate(Circle circle) {
currentCircle = circle;
currentSquareRadius = currentCircle.getRadius() * currentCircle.getRadius();
initializeForNewCircle(circle);
}
private void processSegment(double[] initialVertex, double[] finalVertex) {
double[] segmentDisplacement = displacment2D(initialVertex, finalVertex);
if (segmentDisplacement[0] == 0 && segmentDisplacement[1] == 0) {
return;
}
double segmentLength = Math.sqrt(dotProduct2D(segmentDisplacement, segmentDisplacement));
double[] centerToInitialDisplacement = new double[]{initialVertex[0] - getCurrentCircle().getCenterX(), initialVertex[1] - getCurrentCircle().getCenterY()};
final double leftX = dotProduct2D(centerToInitialDisplacement, segmentDisplacement) / segmentLength;
final double rightX = leftX + segmentLength;
final double y = wedgeProduct2D(segmentDisplacement, centerToInitialDisplacement) / segmentLength;
processSegmentStandardGeometry(leftX, rightX, y);
}
private void processSegmentStandardGeometry(double leftX, double rightX, double y) {
if (y * y > getCurrentSquareRadius()) {
processNonIntersectingRegion(leftX, rightX, y);
} else {
final double intersectionX = Math.sqrt(getCurrentSquareRadius() - y * y);
if (leftX < -intersectionX) {
final double leftRegionRightEndpoint = Math.min(-intersectionX, rightX);
processNonIntersectingRegion(leftX, leftRegionRightEndpoint, y);
}
if (intersectionX < rightX) {
final double rightRegionLeftEndpoint = Math.max(intersectionX, leftX);
processNonIntersectingRegion(rightRegionLeftEndpoint, rightX, y);
}
final double middleRegionLeftEndpoint = Math.max(-intersectionX, leftX);
final double middleRegionRightEndpoint = Math.min(intersectionX, rightX);
final double middleRegionLength = Math.max(middleRegionRightEndpoint - middleRegionLeftEndpoint, 0);
processIntersectingRegion(middleRegionLength, y);
}
}
private void processNonIntersectingRegion(double leftX, double rightX, double y) {
final double initialTheta = Math.atan2(y, leftX);
final double finalTheta = Math.atan2(y, rightX);
double deltaTheta = finalTheta - initialTheta;
if (deltaTheta < -Math.PI) {
deltaTheta += 2 * Math.PI;
} else if (deltaTheta > Math.PI) {
deltaTheta -= 2 * Math.PI;
}
processNonIntersectingRegion(deltaTheta);
}
protected abstract void initialize();
protected abstract void initializeForNewCircle(Circle circle);
protected abstract void processNonIntersectingRegion(double deltaTheta);
protected abstract void processIntersectingRegion(double length, double y);
protected abstract T getValue();
protected final Circle getCurrentCircle() {
return currentCircle;
}
protected final double getCurrentSquareRadius() {
return currentSquareRadius;
}
}
Anyway, that is my description of the algorithm. I think it is nice because it is exact and there aren't really that many cases to check.
If you want an exact solution (or at least as exact as you can get using floating-point arithmetic) then this is going to involve a lot of legwork, because there are so many cases to consider.
I count nine different cases (categorized in the figure below by the number of vertices of the triangle inside the circle, and the number of edges of the triangle that intersect or are contained in the circle):
(However, this kind of enumeration of geometric cases is well known to be tricky, and it wouldn't surprise me at all if I missed one or two!)
So the approach is:
Determine for each vertex of the triangle if it's inside the circle. I'm going to assume you know how to do that.
Determine for each edge of the triangle if it intersects the circle. (I wrote up one method here, or see any computational geometry book.) You'll need to compute the point or points of intersection (if any) for use in step 4.
Determine which of the nine cases you have.
Compute the area of the intersection. Cases 1, 2, and 9 are easy. In the remaining six cases I've drawn dashed lines to show how to partition the area of intersection into triangles and circular segments based on the original vertices of the triangle, and on the points of intersection you computed in step 2. (A small sketch of helpers for steps 1 and 4 follows this list.)
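For steps 1 and 4, the building blocks are small; here is a sketch of the two helpers (the names are mine, not from the answer above): a point-in-circle test, and the area of the circular segment cut off by a chord whose endpoints lie on the circle.
// Step 1: is a triangle vertex inside the circle of radius r centered at (cx, cy)?
static boolean vertexInsideCircle(double px, double py, double cx, double cy, double r) {
    double dx = px - cx, dy = py - cy;
    return dx * dx + dy * dy <= r * r;
}

// Step 4: area of the minor circular segment cut off by the chord between
// two points that both lie on the circle of radius r.
static double circularSegmentArea(double ax, double ay, double bx, double by, double r) {
    double chord = Math.hypot(bx - ax, by - ay);
    double theta = 2 * Math.asin(Math.min(1.0, 0.5 * chord / r)); // central angle
    return 0.5 * r * r * (theta - Math.sin(theta));
}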
This algorithm is going to be rather delicate and prone to errors that affect only one of the cases, so make sure you have test cases that cover all nine cases (and I suggest permuting the vertices of the test triangles too). Pay particular attention to cases in which one of the vertices of the triangle is on the edge of the circle.
If you don't need an exact solution, then rasterizing the figures and counting the pixels in the intersection (as suggested by a couple of other respondents) seems like a much easier approach to code, and correspondingly less prone to errors.
I'm almost a year and a half late, but I thought people might be interested in the code here that I wrote, which I think does this correctly. Look in function IntersectionArea near the bottom. The general approach is to pick off the convex polygon circumscribed by the circle, and then deal with the little circular caps.
Assuming you're talking integer pixels, not real, the naive implementation would be to loop through every pixel of the triangle and check the distance from the circle's center against its radius.
It's not a cute formula, or particularly fast, but it does get the job done.
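For what it's worth, a rough sketch of that pixel loop (my own, using java.awt.Polygon so the point-in-triangle test comes for free):
// Count the integer pixels that lie inside both the triangle and the circle.
static long intersectionPixelCount(java.awt.Polygon triangle, double cx, double cy, double r) {
    java.awt.Rectangle box = triangle.getBounds();
    long count = 0;
    for (int x = box.x; x < box.x + box.width; x++) {
        for (int y = box.y; y < box.y + box.height; y++) {
            double dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= r * r && triangle.contains(x, y)) {
                count++;
            }
        }
    }
    return count; // approximate intersection area, in pixels
}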
try computational geometry
Note: this is not a trivial problem, I hope it's not homework ;-)
If you have a GPU at your disposal, you could use this technique for obtaining a pixel count of the intersection.
I think you shouldn't approximate the circle as a set of triangles; instead, you can approximate its shape with a polygon.
The naive algorithm can look like:
Convert you circle to polygon with some desired number of vertices.
Calculate the intersection of two polygons (converted circle and a triangle).
Calculate the area of that intersection.
You can optimize this algorithm by combining step 2 and step 3 into a single function; a rough sketch of the whole approach follows the links below.
Read these links:
Area of convex polygon
Intersection of convex polygons
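A rough sketch of this approach (mine, not the answerer's code): approximate the circle by an inscribed regular n-gon, let java.awt.geom.Area do the polygon intersection of step 2, and apply the shoelace formula for step 3. An inscribed regular n-gon underestimates the circle's area by roughly a factor of (2π/n)²/6, so a couple of hundred vertices already gets within about 0.01%.
import java.awt.Shape;
import java.awt.geom.Area;
import java.awt.geom.Path2D;
import java.awt.geom.PathIterator;

public class PolygonApproximation {

    // Approximate the circle by an inscribed regular n-gon.
    static Path2D.Double circleAsPolygon(double cx, double cy, double r, int n) {
        Path2D.Double p = new Path2D.Double();
        for (int i = 0; i < n; i++) {
            double a = 2 * Math.PI * i / n;
            double x = cx + r * Math.cos(a), y = cy + r * Math.sin(a);
            if (i == 0) p.moveTo(x, y); else p.lineTo(x, y);
        }
        p.closePath();
        return p;
    }

    // Shoelace formula over a (possibly multi-subpath) polygonal shape.
    static double polygonArea(Shape s) {
        PathIterator it = s.getPathIterator(null, 1e-9); // flattening, just in case
        double[] c = new double[6];
        double area = 0, sx = 0, sy = 0, px = 0, py = 0;
        while (!it.isDone()) {
            switch (it.currentSegment(c)) {
                case PathIterator.SEG_MOVETO: sx = px = c[0]; sy = py = c[1]; break;
                case PathIterator.SEG_LINETO: area += px * c[1] - c[0] * py; px = c[0]; py = c[1]; break;
                case PathIterator.SEG_CLOSE:  area += px * sy - sx * py; px = sx; py = sy; break;
            }
            it.next();
        }
        return Math.abs(area) / 2;
    }

    public static void main(String[] args) {
        Path2D.Double triangle = new Path2D.Double();
        triangle.moveTo(-10, -10);
        triangle.lineTo(10, -10);
        triangle.lineTo(0, 10);
        triangle.closePath();

        Area intersection = new Area(circleAsPolygon(0, 0, 1, 256));
        intersection.intersect(new Area(triangle));
        System.out.println(polygonArea(intersection)); // close to PI for this setup
    }
}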
Since your shapes are convex, you can use Monte Carlo area estimation.
Draw a box around the circle and triangle.
Choose random points in the box and keep a count of how many fall in the circle, and how many fall in both the circle and triangle.
Area of Intersection ≅ Area of circle * # points in circle and triangle / # points in circle
Stop choosing points when the estimated area doesn't change by more than a certain amount over a certain number of rounds, or just choose a fixed number of points based on the area of the box. The area estimate should converge pretty fast unless one of your shapes has very little area.
Note: Here's how you determine if a point is in a triangle: Barycentric coordinates
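A short sketch of that estimator (mine; it samples a box around just the circle, which is enough here because only points that land inside the circle enter the ratio):
import java.util.Random;

public class MonteCarloIntersection {

    static double estimateIntersectionArea(double cx, double cy, double r,
                                           double[] tx, double[] ty,
                                           int samples, Random rng) {
        long inCircle = 0, inBoth = 0;
        for (int i = 0; i < samples; i++) {
            // uniform sample in the circle's bounding box
            double x = cx - r + 2 * r * rng.nextDouble();
            double y = cy - r + 2 * r * rng.nextDouble();
            double dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= r * r) {
                inCircle++;
                if (pointInTriangle(x, y, tx, ty)) inBoth++;
            }
        }
        // Area of intersection ≈ area of circle * (# in both / # in circle)
        return inCircle == 0 ? 0 : Math.PI * r * r * inBoth / (double) inCircle;
    }

    // Same-side / barycentric-style point-in-triangle test.
    static boolean pointInTriangle(double px, double py, double[] tx, double[] ty) {
        double d1 = side(px, py, tx[0], ty[0], tx[1], ty[1]);
        double d2 = side(px, py, tx[1], ty[1], tx[2], ty[2]);
        double d3 = side(px, py, tx[2], ty[2], tx[0], ty[0]);
        boolean hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
        boolean hasPos = d1 > 0 || d2 > 0 || d3 > 0;
        return !(hasNeg && hasPos);
    }

    static double side(double px, double py, double ax, double ay, double bx, double by) {
        return (px - bx) * (ay - by) - (ax - bx) * (py - by);
    }
}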
How exact do you need to be? If you can approximate the circle with simpler shapes, you can simplify the problem. It wouldn't be hard to model a circle as a set of very narrow triangles meeting at the center, for example.
My first instinct would be to transform everything so that the circle is centered on the origin, convert the triangle to polar coordinates, and solve for the intersection (or encompassment) of the triangle with the circle. I haven't actually worked it through on paper yet though, so that's only a hunch.
If just one of the triangle's line segments intersects the circle, the pure math solution isn't too hard. Once you know where the two points of intersection are, you can use the distance formula to find the chord length.
According to these equations:
ϑ = 2 sin⁻¹(0.5 c / r)
A = 0.5 r² (ϑ - sin(ϑ))
where c is the chord length, r is the radius, ϑ is the angle subtended at the center, and A is the area. Note that this solution breaks if more than half the circle is cut off.
It's probably not worth the effort if you just need an approximation, since it makes several assumptions about what the actual intersection looks like.
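A quick numeric check of those two formulas: for r = 1 and a chord of length sqrt(2), the central angle should be π/2 and the cut-off area 0.5 * (π/2 - 1) ≈ 0.2854.
public class ChordFormulaCheck {
    public static void main(String[] args) {
        double r = 1;
        double c = Math.sqrt(2);
        double theta = 2 * Math.asin(0.5 * c / r);
        double area = 0.5 * r * r * (theta - Math.sin(theta));
        System.out.println(theta); // 1.5707963...
        System.out.println(area);  // 0.2853981...
    }
}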
