Hard Faced Geodesic Sphere - algorithm

This is an image of the type of geometry I'm looking to create.
I'm wanting to create an algorithm for a geodesic sphere like this. The thing is, I need the faces of the hexagons & pentagons to be flat, not spherical in nature like in most algorithms I find. It would also be nice if the algorithm could find the next step so I can move up 'tessellations', for lack of a better word. Essentially have smaller hexagons & pentagons that approximate it.
I've found a few ways to make some of the lower tessellations, but even one that would make a sphere like the attached picture would be enough for my task. The faces need to be hard (flat), though, for my project.

Here is a script to generate a regular icosahedron.
And then you can subdivide each triangle with this algorithm (C# with Vector3 struct):
Vector3[] icoTriangleCorners = new Vector3[3];

public void Subdivide(HashSet<Vector3[]> faces, int subdivision, double radius)
{
    // Step vectors along two edges of the icosahedron triangle.
    Vector3 x = (icoTriangleCorners[1] - icoTriangleCorners[0]) / (subdivision + 1);
    Vector3 y = (icoTriangleCorners[2] - icoTriangleCorners[1]) / (subdivision + 1);
    for (int i = 0; i <= subdivision; ++i)
    {
        for (int j = 0; j <= i; ++j)
        {
            // Upward-pointing sub-triangle, projected onto the sphere.
            Vector3 a = (icoTriangleCorners[0] + i * x + j * y).normalized * radius;
            Vector3 b = (icoTriangleCorners[0] + (i + 1) * x + j * y).normalized * radius;
            Vector3 c = (icoTriangleCorners[0] + (i + 1) * x + (j + 1) * y).normalized * radius;
            faces.Add(new Vector3[] { a, b, c });
            if (i == j) continue;
            // Complementary sub-triangle filling the gap.
            Vector3 d = (icoTriangleCorners[0] + i * x + (j + 1) * y).normalized * radius;
            faces.Add(new Vector3[] { c, d, a });
        }
    }
}
It generates the new triangle faces with vertices in the same clockwise order.
The vertices of the geodesic sphere are the centroids of the triangles. The centers of the pentagons and hexagons are the corners of the triangles.
You can improve the algorithm by creating a new class for the corners and checking whether the Vector3 of a corner has already been created. You can then collect the triangles joining a corner in the corner object. This approach makes the generation of the final faces easier.
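As a rough sketch of that corner-based approach (written in C with an assumed Vec3 type rather than the C# Vector3 above, so treat it as illustrative only): take the centroids of the 5 or 6 triangles that meet at a corner and sort them by angle around that corner; emitted in that order they form one pentagon or hexagon face of the geodesic sphere.
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static Vec3   vSub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double vDot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   vCross(Vec3 a, Vec3 b)
{
    Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}
static Vec3 vNormalize(Vec3 a)
{
    double l = sqrt(vDot(a, a));
    Vec3 r = { a.x / l, a.y / l, a.z / l };
    return r;
}

/* Centroid of one subdivided triangle: this becomes a vertex of the dual face. */
Vec3 centroid(Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 r = { (a.x + b.x + c.x) / 3.0, (a.y + b.y + c.y) / 3.0, (a.z + b.z + c.z) / 3.0 };
    return r;
}

/* Sort the centroids of the 5 or 6 triangles meeting at `corner` by angle around
   the corner's outward direction (sphere centred at the origin), so they can be
   emitted in order as one pentagon/hexagon face. */
void sortDualFace(Vec3 corner, Vec3 *centroids, int n)
{
    if (n > 6) return;                        /* a corner joins at most 6 triangles */
    Vec3 nrm  = vNormalize(corner);
    Vec3 seed = fabs(nrm.x) < 0.9 ? (Vec3){ 1, 0, 0 } : (Vec3){ 0, 1, 0 };
    Vec3 u = vNormalize(vCross(nrm, seed));   /* tangent-plane basis at the corner */
    Vec3 v = vCross(nrm, u);
    double ang[6];
    for (int i = 0; i < n; i++) {
        Vec3 d = vSub(centroids[i], corner);
        ang[i] = atan2(vDot(d, v), vDot(d, u));
    }
    for (int i = 1; i < n; i++) {             /* insertion sort by angle */
        Vec3 c = centroids[i];
        double a = ang[i];
        int j = i - 1;
        while (j >= 0 && ang[j] > a) { centroids[j + 1] = centroids[j]; ang[j + 1] = ang[j]; j--; }
        centroids[j + 1] = c;
        ang[j + 1] = a;
    }
}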

Related

Confusion about zFar and zNear plane offsets using glm::perspective

I have been using glm to help build a software rasterizer for self education. In my camera class I am using glm::lookat() to create my view matrix and glm::perspective() to create my perspective matrix.
I seem to be getting what I expect for my left, right, top and bottom clipping planes. However, I seem to be either doing something wrong for my near/far planes, or there is an error in my understanding. I have reached a point at which my "google-fu" has failed me.
Operating under the assumption that I am correctly extracting clip planes from my glm::perspective matrix, and using the general plane equation:
aX+bY+cZ+d = 0
I am getting strange d or "offset" values for my zNear and zFar planes.
It is my understanding that the d value is the amount by which I would be shifting/translating the point P0 of a plane along the normal vector.
They are 0.200200200 and -0.200200200 respectively. However, my normals are correctly oriented at +1.0f and -1.f along the z-axis, as expected for a plane perpendicular to my z basis vector.
So when testing a point such as the (0, 0, -5) world space against these planes, it is transformed by my view matrix to:
(0, 0, 5.81181192)
so testing it against these planes in a clip chain, said example vertex would be culled.
Here is the start of a camera class establishing the relevant matrices:
static constexpr glm::vec3 UPvec(0.f, 1.f, 0.f);
static constexpr auto zFar = 100.f;
static constexpr auto zNear = 0.1f;
Camera::Camera(glm::vec3 eye, glm::vec3 center, float fovY, float w, float h) :
viewMatrix{ glm::lookAt(eye, center, UPvec) },
perspectiveMatrix{ glm::perspective(glm::radians<float>(fovY), w/h, zNear, zFar) },
frustumLeftPlane {setPlane(0, 1)},
frustumRighPlane {setPlane(0, 0)},
frustumBottomPlane {setPlane(1, 1)},
frustumTopPlane {setPlane(1, 0)},
frstumNearPlane {setPlane(2, 0)},
frustumFarPlane {setPlane(2, 1)},
The frustum objects are based off the following struct:
struct Plane
{
glm::vec4 normal;
float offset;
};
I have extracted the 6 clipping planes from the perspective matrix as below:
Plane Camera::setPlane(const int& row, const bool& sign)
{
    float temp[4]{};
    Plane plane{};
    if (sign == 0)
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] + perspectiveMatrix[i][row];
        }
    }
    else
    {
        for (int i = 0; i < 4; ++i)
        {
            temp[i] = perspectiveMatrix[i][3] - perspectiveMatrix[i][row];
        }
    }
    plane.normal.x = temp[0];
    plane.normal.y = temp[1];
    plane.normal.z = temp[2];
    plane.normal.w = 0.f;
    plane.offset = temp[3];
    plane.normal = glm::normalize(plane.normal);
    return plane;
}
Any help would be appreciated, as now I am at a loss.
Many thanks.
The d parameter of a plane equation describes how much the plane is offset from the origin along the plane normal. This also takes into account the length of the normal.
One can't just normalize the normal without also adjusting the d parameter since normalizing changes the length of the normal. If you want to normalize a plane equation then you also have to apply the division step to the d coordinate:
float normalLength = sqrt(temp[0] * temp[0] + temp[1] * temp[1] + temp[2] * temp[2]);
plane.normal.x = temp[0] / normalLength;
plane.normal.y = temp[1] / normalLength;
plane.normal.z = temp[2] / normalLength;
plane.normal.w = 0.f;
plane.offset = temp[3] / normalLength;
Side note 1: Usually, one would store the offset of a plane equation in the w-coordinate of a vec4 instead of a separate variable. The reason is that the typical operation you perform with it is a point to plane distance check like dist = n * x - d (for a given point x, normal n, offset d, * is dot product), which can then be written as dist = [n, d] * [x, -1].
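As a tiny sketch of side note 1 (assumed types and names, not code from the question): store the plane as a single 4-component vector [n, d] and the distance check becomes one dot product against [x, -1]. Mind that the sign of d has to match whatever convention the plane was extracted with.
typedef struct { float x, y, z, w; } Vec4;

static float dot4(Vec4 a, Vec4 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

/* dist = n . p - d, written as [n, d] . [p, -1] */
float planeDistance(Vec4 plane /* [nx, ny, nz, d] */, float px, float py, float pz)
{
    Vec4 p = { px, py, pz, -1.0f };
    return dot4(plane, p);
}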
Side note 2: Most software and hardware rasterizers perform clipping after the projection step, since it's cheaper and easier to implement.
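And a minimal sketch of what side note 2 amounts to (assuming the OpenGL clip-space convention where every coordinate must lie in [-w, +w]; Direct3D restricts z to [0, w] instead): after the projection matrix has been applied, a point can be tested directly in clip space, with no plane extraction at all.
#include <stdbool.h>

bool insideClipSpace(float x, float y, float z, float w)
{
    return -w <= x && x <= w &&
           -w <= y && y <= w &&
           -w <= z && z <= w;
}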

Find a Circle ((x,y,r)) that has maximum number of points 'on' it; given a set of points(x,y) in a 2D plane

Given N points on a plane (of the form (x, y)), find the circle that has the maximum number of points on it.
P.S. : The point should lie on the circumference of the circle.
What is the most efficient algorithm to solve this problem and how does it work? Which data structures would you use to solve this problem? This was asked in one of the FANG coding interviews.
As a starting point, the simple O(N^3) solution is to find the circle corresponding to each unique triple of points, while counting the number of occurrences of each circle you find.
If a circle has K points on it, then you will find it K-choose-3 times, so the circle you find most often is the one with the most points on it.
There are complications in any actual implementation, but they are different complications depending on how your points are represented and whether you want exact or approximate answers.
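As a small C sketch of the core step (a hypothetical helper, not part of the answer above): the circle through three non-collinear points, obtained by intersecting two perpendicular bisectors. The bookkeeping needed to count duplicate circles with a tolerance is exactly the "complication" mentioned above and is left out.
#include <math.h>

/* Returns 0 if the points are (nearly) collinear, 1 otherwise. */
int circleFrom3Points(double x1, double y1, double x2, double y2,
                      double x3, double y3,
                      double *cx, double *cy, double *r)
{
    double d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2));
    if (fabs(d) < 1e-12)
        return 0;
    double s1 = x1 * x1 + y1 * y1;
    double s2 = x2 * x2 + y2 * y2;
    double s3 = x3 * x3 + y3 * y3;
    *cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d;
    *cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d;
    *r  = hypot(x1 - *cx, y1 - *cy);
    return 1;
}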
Case 1 : Hough Transform
In computer vision problem solving we often search for circles amongst edge information. This problem is characterised by having many data points, possibly originating from many different circles in the presence of a great deal of noise.
The usual approach to solving this problem is the Hough Transform https://en.wikipedia.org/wiki/Circle_Hough_Transform. The basic idea is to sum the evidence for the circles which can pass through each point (x, y).
We create an integer array called Hough [a, b, r] which parametrises all the possible circles that can pass through your point (x,y). This is equivalent to drawing a circle of radius 1 in the r=1 plane centred on (x,y); a circle of radius 2 in the r=2 plane centred on (x,y) etc.
Each time a circle is drawn through a point in [a, b, r] we add 1 to the corresponding value. Some points accumulate a lot of evidence. These points correspond to the circles of interest.
Image from cis.rit.edu illustrates what happens in one of the r-planes.
Doing this for each point (x,y) will generate evidence towards each of the points in [a,b,r] corresponding to the circle you seek. So just scan this array to find the point with the maximum evidence. That is your circle.
Example of Hough Transform
Knowing the radius of the circle reduces this from an O(n^3) problem to an O(n^2) problem, as only one plane needs to be constructed and scanned. I have also had good results plotting the radii in a log space to give a (less accurate) O(n^2 log n) algorithm.
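A rough C sketch of the voting step described above (the accumulator sizes are arbitrary assumptions, and the points are assumed to already be in accumulator coordinates): every point (x, y) votes for every centre (a, b) and radius r of a circle that could pass through it. The circle you seek is then the cell [a, b, r] holding the most votes.
#include <math.h>
#include <string.h>

#define W    64   /* accumulator width, a axis  - assumed size */
#define H    64   /* accumulator height, b axis - assumed size */
#define RMAX 32   /* number of radius planes    - assumed size */

static int hough[RMAX][H][W];

void houghVote(const double *xs, const double *ys, int n)
{
    const double pi = 3.14159265358979323846;
    memset(hough, 0, sizeof hough);
    for (int i = 0; i < n; i++) {
        for (int r = 1; r < RMAX; r++) {
            /* draw a circle of radius r around (xs[i], ys[i]) in the r-plane */
            for (int t = 0; t < 360; t++) {
                double th = t * pi / 180.0;
                int a = (int)lround(xs[i] - r * cos(th));
                int b = (int)lround(ys[i] - r * sin(th));
                if (a >= 0 && a < W && b >= 0 && b < H)
                    hough[r][b][a]++;
            }
        }
    }
}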
Case 2 : Circle Fitting
If the points are known to lie near the boundary of a single circle, and/or if the points are not very numerous, and/or we are very sure that there is very little noise, then the Hough transform is a poor solution as it is computationally intensive, memory hungry and, because of the raster nature of the accumulator array, possibly not very accurate.
In this case we may want to fit a circle by analogy to line fitting techniques that use linear regression. A discussion of circle fitting techniques can be found in https://pdfs.semanticscholar.org/faac/44067f04abf10af7dd583fca0c35c5937f95.pdf.
A (rather simple minded) implementation of this algorithm is presented below.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
typedef struct {
float x;
float y;
} point;
/*
* Function using a modified least squares approach to circle fitting.
*
* Reference :
*
* Umbach, D. and Jones, K. N. "A few methods for fitting circles to data",
* IEEE Trans Instrumentation and Measurement
* vol XX(Y) 2000
*
* https://pdfs.semanticscholar.org/faac/44067f04abf10af7dd583fca0c35c5937f95.pdf
*
* NOTES
*
* The code below has not been checked for numerical stability or conditioning.
*/
int circle_fit_MLS (point P[], int n, double *x_pos, double *y_pos, double *radius)
{
int i;
double sum_x=0.0, sum_y=0.0, sum_xx=0.0, sum_xy=0.0, sum_yy=0.0, sum_xyy=0.0, sum_yxx=0.0, sum_xxx=0.0, sum_yyy=0.0;
double A, B, C, D, E;
double x2, y2, xy, F, xdif, ydif;
for (i=0; i<n; i++)
{
sum_x += P[i].x;
sum_y += P[i].y;
x2 = P[i].x * P[i].x;
y2 = P[i].y * P[i].y;
xy = P[i].x * P[i].y;
sum_xx += x2;
sum_yy += y2;
sum_xy += xy;
sum_xyy += xy*P[i].y;
sum_yxx += P[i].y*x2;
sum_xxx += x2*P[i].x;
sum_yyy += y2*P[i].y;
}
A = n * sum_xx - sum_x * sum_x;
B = n * sum_xy - sum_x * sum_y;
C = n * sum_yy - sum_y * sum_y;
D = 0.5 * ( n * (sum_xyy + sum_xxx) - sum_x * sum_yy - sum_x * sum_xx);
E = 0.5 * ( n * (sum_yxx + sum_yyy) - sum_y * sum_xx - sum_y * sum_yy);
F = A*C - B*B;
*x_pos = (D*C - B*E) / F;
*y_pos = (A*E - B*D) / F;
*radius = 0;
for (i=0; i<n; i++)
{
xdif = P[i].x - *x_pos;
ydif = P[i].y - *y_pos;
*radius += sqrt(xdif * xdif + ydif * ydif);
}
*radius /= n;
return 0;
}
The main program below can be used for testing the code. Please post back any results/observations/suggestions for improvement in the comments.
int main()
{
point *P;
int n, i;
double xpos, ypos, radius;
printf ("Please enter the number of points \n> ");
scanf ("%d", &n);
P = malloc (n * sizeof(point));
for (i=0; i<n; i++)
{
printf ("x%d = ", i);
scanf ("%f", &P[i].x);
printf ("y%d = ", i);
scanf ("%f", &P[i].y);
}
circle_fit_MLS (P, n, &xpos, &ypos, &radius);
printf (" a = %f\n", xpos);
printf (" b = %f\n", ypos);
printf (" r = %f\n", radius);
}

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated. I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like this.
This is what i have for now:
int c = 0;
int radius = 30;
allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius*cos(alpha);
        float yf = 350 + radius*sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c >= allPixels.length) { c = 0; }
        pixels[y*700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are only ints, so it seems like some pixels inside the circle won't fill; I can live with that. I'm looking for something a bit more complex, like I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint all places inside the circle. For every line, calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dy = y - center_y
    dx = Sqrt(radius * radius - dy * dy)
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
When you find the places for all pixels, you can make a correspondence between the initial pixel places and the calculated ones and move them step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to make M steps, moving the pixel along a spiral, calculate the intermediate coordinates:
calc values once:
r0 = Sqrt(x0*x0 + y0*y0) //Math.Hypot if available
r1 = Sqrt(x1*x1 + y1*y1)
fi0 = Math.Atan2(y0, x0)
fi1 = Math.Atan2(y1, x1)
if fi1 < fi0 then
fi1 = fi1 + 2 * Pi;
for i = 1; i <=M ; i++
x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
shift by center coordinates
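A small C sketch of that interpolation (hypothetical names; coordinates are relative to the circle centre (cx, cy) as described above): step i of M moves the pixel along the spiral between its start and end positions.
#include <math.h>

void spiralStep(double x0, double y0, double x1, double y1,
                int i, int M, double cx, double cy,
                double *outX, double *outY)
{
    double r0  = hypot(x0, y0), r1 = hypot(x1, y1);
    double fi0 = atan2(y0, x0), fi1 = atan2(y1, x1);
    if (fi1 < fi0)
        fi1 += 2.0 * acos(-1.0);              /* keep the angular sweep positive */
    double t  = (double)i / (double)M;        /* i = 1..M */
    double r  = r0  + t * (r1  - r0);
    double fi = fi0 + t * (fi1 - fi0);
    *outX = cx + r * cos(fi);                 /* shift back by the centre */
    *outY = cy + r * sin(fi);
}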
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, though no pixels are involved in that case.
If you do need to draw an ellipse and use pixels, you can make use of PGraphics which is similar to using a separate buffer/"layer" to draw into using Processing drawing commands but it also has pixels[] you can access.
Let's say you want to draw a low-res pixel circle: you can create a small PGraphics, disable smoothing, draw the circle, then render the circle at a higher resolution. The only catch is these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;
void setup(){
//disable sketch's aliasing
noSmooth();
buffer = createGraphics(25,25);
buffer.beginDraw();
//disable buffer's aliasing
buffer.noSmooth();
buffer.noFill();
buffer.stroke(255);
buffer.endDraw();
}
void draw(){
background(255);
//draw small circle
float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
buffer.beginDraw();
buffer.background(0);
buffer.ellipse(buffer.width / 2,buffer.height / 2, circleSize,circleSize);
buffer.endDraw();
//render small circle at higher resolution (blocky - no aliasing)
image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar to cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it's focusing on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer

Least distance from a point to an area

I am trying to find a point (P2) in a closed area that has the minimum distance to a point (P1). The area is built of homogeneous pixels; it is not shaped perfectly and it is not necessarily convex. This is basically the problem of reaching an area via the shortest path.
The whole space is a stored in the form of a bitmap in the memory. What is the best method to find P2? Should I go with random search (optimization) methods? Optimization methods do not give the exact minimum but they are faster than brute forcing every pixel of the area. I need to perform thousands of these decisions in a few seconds.
The MinX,MinY,MaxX,MaxY of the area is available.
Thanks.
Here is my code, it's a discrete version using discrete coordinates:
Hint: the method I used to find the circumference of the Area is simple; it's like asking how you tell the beach from the land. Answer: the beach is bordered by sea on one side, so in my graph matrix a NULL reference is sea and Points are land!
Class Point:
class Point
{
public int x;
public int y;
public Point (int X, int Y)
{
this.x = X;
this.y = Y;
}
}
Class Area:
class Area
{
    public ArrayList<Point> points;

    public Area ()
    {
        points = new ArrayList<Point>();
    }
}
Discrete Distance Utility Class:
class DiscreteDistance
{
    public static double distance (Point a, Point b)
    {
        return Math.sqrt(Math.pow(b.x - a.x, 2) + Math.pow(b.y - a.y, 2));
    }

    public static double distance (Point a, Area area)
    {
        ArrayList<Point> cir = circumference(area);
        Double d = null;
        for (Point b : cir)
        {
            if (d == null || distance(a, b) < d)
            {
                d = distance(a, b);
            }
        }
        return d;
    }

    public static ArrayList<Point> circumference (Area area)
    {
        int minX = Integer.MAX_VALUE;
        int minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE;
        int maxY = Integer.MIN_VALUE;
        for (Point p : area.points)
        {
            if (p.x < minX) minX = p.x;
            if (p.x > maxX) maxX = p.x;
            if (p.y < minY) minY = p.y;
            if (p.y > maxY) maxY = p.y;
        }
        int w = maxX - minX + 1;
        int h = maxY - minY + 1;
        Point[][] graph = new Point[w][h];
        for (Point p : area.points)
        {
            graph[p.x - minX][p.y - minY] = p;
        }
        ArrayList<Point> cir = new ArrayList<Point>();
        for (int i = 0; i < w; i++)
        {
            for (int j = 0; j < h; j++)
            {
                // A cell is on the circumference if it is land and has at least one sea neighbour.
                if (graph[i][j] != null
                    && ((i > 0 && graph[i-1][j] == null)
                        || (i < (w-1) && graph[i+1][j] == null)
                        || (j > 0 && graph[i][j-1] == null)
                        || (j < (h-1) && graph[i][j+1] == null)))
                {
                    cir.add(graph[i][j]);
                }
            }
        }
        return cir;
    }
}
We have to assume you know or can easily find at least one pixel address (x0, y0) inside the area. The fastest solution will certainly be to search from this pixel in a straight line, say in the plus-x direction. Alternately, since you have a bounding box, pick the compass point toward the nearest boundary and go in that direction.
When you find the edge of the region, search depth first along the boundary. For general polygons with self-intersections and/or holes, this will have to be a complete and carefully implemented DFS maintaining a set of already-visited vertices. Only if the polygon is simple will it suffice to remember only the last-visited pixel to avoid doubling back over what's already searched.
During the DFS, compute the distance squared to p1 for each boundary pixel and track the minimum.
Note, if you are really pressed for performance this distance squared can be updated incrementally to replace multiplications with additions. I.e. if you know d2=(x2-x1)^2+(y2-y1)^2 and then increment x2 by 1 to take the next step around the boundary, the new squared distance is
((x2+1) - x1)^2 + (y2-y1)^2 = d2 + 2(x2 - x1) + 1
So you can update d2 with d2 += 2(x2 - x1) + 1. The multiplication by 2 is of course just a left shift, so this is very cheap. There are similar very cheap updates for steps in each direction.
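As a tiny C sketch of that update (hypothetical helper, integer pixel coordinates assumed), stepping the boundary pixel one unit in +x looks like this; the other three directions are analogous:
/* d2 must equal (x2 - x1)^2 + (y2 - y1)^2 on entry; the updated value for the
   incremented x2 is returned. The compiler turns the * 2 into a shift. */
static long stepRight(long *x2, long x1, long d2)
{
    d2 += 2 * (*x2 - x1) + 1;
    *x2 += 1;
    return d2;
}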
One approach might be to settle for an approximate solution by first calculating a triangulation of the area; afterwards, only the corners of the triangles have to be checked. This approach could be beneficial especially if, in the many evaluations you plan for, the outside point changes but the shape itself does not.
You could find the center of the rect of the area, and use a triangle between the two points to find the angle, and then use a function f(x) = mx + b to do the pixel walk until you find a pixel of the area to calculate the distance, and then rotate the angle until you find the shortest path.

How can I determine whether a 2D Point is within a Polygon?

I'm trying to create a fast 2D point inside polygon algorithm, for use in hit-testing (e.g. Polygon.contains(p:Point)). Suggestions for effective techniques would be appreciated.
For graphics, I'd rather not use integers. Many systems use integers for UI painting (pixels are ints after all), but macOS, for example, uses floats for everything. macOS only knows points, and a point can translate to one pixel, but depending on monitor resolution, it might translate to something else. On retina screens half a point (0.5/0.5) is a pixel. Still, I never noticed that macOS UIs are significantly slower than other UIs. After all, 3D APIs (OpenGL or Direct3D) also work with floats, and modern graphics libraries very often take advantage of GPU acceleration.
Now you said speed is your main concern, okay, let's go for speed. Before you run any sophisticated algorithm, first do a simple test. Create an axis aligned bounding box around your polygon. This is very easy, fast and can already save you a lot of calculations. How does that work? Iterate over all points of the polygon and find the min/max values of X and Y.
E.g. you have the points (9/1), (4/3), (2/7), (8/2), (3/6). This means Xmin is 2, Xmax is 9, Ymin is 1 and Ymax is 7. A point outside of the rectangle with the two corners (2/1) and (9/7) cannot be within the polygon.
// p is your point, p.x is the x coord, p.y is the y coord
if (p.x < Xmin || p.x > Xmax || p.y < Ymin || p.y > Ymax) {
// Definitely not within the polygon!
}
This is the first test to run for any point. As you can see, this test is ultra fast but it's also very coarse. To handle points that are within the bounding rectangle, we need a more sophisticated algorithm. There are a couple of ways how this can be calculated. Which method works also depends on whether the polygon can have holes or will always be solid. Here are examples of solid ones (one convex, one concave):
And here's one with a hole:
The green one has a hole in the middle!
The easiest algorithm, that can handle all three cases above and is still pretty fast is named ray casting. The idea of the algorithm is pretty simple: Draw a virtual ray from anywhere outside the polygon to your point and count how often it hits a side of the polygon. If the number of hits is even, it's outside of the polygon, if it's odd, it's inside.
The winding number algorithm would be an alternative, it is more accurate for points being very close to a polygon line but it's also much slower. Ray casting may fail for points too close to a polygon side because of limited floating point precision and rounding issues, but in reality that is hardly a problem, as if a point lies that close to a side, it's often visually not even possible for a viewer to recognize if it is already inside or still outside.
You still have the bounding box of above, remember? Just pick a point outside the bounding box and use it as starting point for your ray. E.g. the point (Xmin - e/p.y) is outside the polygon for sure.
But what is e? Well, e (actually epsilon) gives the bounding box some padding. As I said, ray tracing fails if we start too close to a polygon line. Since the bounding box might equal the polygon (if the polygon is an axis-aligned rectangle, the bounding box is equal to the polygon itself!), we need some padding to make this safe, that's all. How big should you choose e? Not too big. It depends on the coordinate system scale you use for drawing. If your pixel step width is 1.0, then just choose 1.0 (yet 0.1 would have worked as well).
Now that we have the ray with its start and end coordinates, the problem shifts from "is the point within the polygon" to "how often does the ray intersects a polygon side". Therefore we can't just work with the polygon points as before, now we need the actual sides. A side is always defined by two points.
side 1: (X1/Y1)-(X2/Y2)
side 2: (X2/Y2)-(X3/Y3)
side 3: (X3/Y3)-(X4/Y4)
:
You need to test the ray against all sides. Consider the ray to be a vector and every side to be a vector. The ray has to hit each side exactly once or never at all. It can't hit the same side twice. Two lines in 2D space will always intersect exactly once, unless they are parallel, in which case they never intersect. However since vectors have a limited length, two vectors might not be parallel and still never intersect because they are too short to ever meet each other.
// Test the ray against all sides
int intersections = 0;
for (side = 0; side < numberOfSides; side++) {
// Test if current side intersects with ray.
// If yes, intersections++;
}
if ((intersections & 1) == 1) {
// Inside of polygon
} else {
// Outside of polygon
}
So far so good, but how do you test if two vectors intersect? Here's some C code (not tested), that should do the trick:
#define NO 0
#define YES 1
#define COLLINEAR 2
int areIntersecting(
float v1x1, float v1y1, float v1x2, float v1y2,
float v2x1, float v2y1, float v2x2, float v2y2
) {
float d1, d2;
float a1, a2, b1, b2, c1, c2;
// Convert vector 1 to a line (line 1) of infinite length.
// We want the line in linear equation standard form: A*x + B*y + C = 0
// See: http://en.wikipedia.org/wiki/Linear_equation
a1 = v1y2 - v1y1;
b1 = v1x1 - v1x2;
c1 = (v1x2 * v1y1) - (v1x1 * v1y2);
// Every point (x,y), that solves the equation above, is on the line,
// every point that does not solve it, is not. The equation will have a
// positive result if it is on one side of the line and a negative one
// if is on the other side of it. We insert (x1,y1) and (x2,y2) of vector
// 2 into the equation above.
d1 = (a1 * v2x1) + (b1 * v2y1) + c1;
d2 = (a1 * v2x2) + (b1 * v2y2) + c1;
// If d1 and d2 both have the same sign, they are both on the same side
// of our line 1 and in that case no intersection is possible. Careful,
// 0 is a special case, that's why we don't test ">=" and "<=",
// but "<" and ">".
if (d1 > 0 && d2 > 0) return NO;
if (d1 < 0 && d2 < 0) return NO;
// The fact that vector 2 intersected the infinite line 1 above doesn't
// mean it also intersects the vector 1. Vector 1 is only a subset of that
// infinite line 1, so it may have intersected that line before the vector
// started or after it ended. To know for sure, we have to repeat the
// the same test the other way round. We start by calculating the
// infinite line 2 in linear equation standard form.
a2 = v2y2 - v2y1;
b2 = v2x1 - v2x2;
c2 = (v2x2 * v2y1) - (v2x1 * v2y2);
// Calculate d1 and d2 again, this time using points of vector 1.
d1 = (a2 * v1x1) + (b2 * v1y1) + c2;
d2 = (a2 * v1x2) + (b2 * v1y2) + c2;
// Again, if both have the same sign (and neither one is 0),
// no intersection is possible.
if (d1 > 0 && d2 > 0) return NO;
if (d1 < 0 && d2 < 0) return NO;
// If we get here, only two possibilities are left. Either the two
// vectors intersect in exactly one point or they are collinear, which
// means they intersect in any number of points from zero to infinite.
if ((a1 * b2) - (a2 * b1) == 0.0f) return COLLINEAR;
// If they are not collinear, they must intersect in exactly one point.
return YES;
}
The input values are the two endpoints of vector 1 (v1x1/v1y1 and v1x2/v1y2) and vector 2 (v2x1/v2y1 and v2x2/v2y2). So you have 2 vectors, 4 points, 8 coordinates. YES and NO are clear. YES increases intersections, NO does nothing.
What about COLLINEAR? It means both vectors lie on the same infinite line; depending on position and length, they don't intersect at all or they intersect in an endless number of points. I'm not absolutely sure how to handle this case; I would not count it as an intersection either way. Well, this case is rather rare in practice anyway because of floating point rounding errors; better code would probably not test for == 0.0f but instead for something like < epsilon, where epsilon is a rather small number.
If you need to test a larger number of points, you can certainly speed up the whole thing a bit by keeping the linear equation standard forms of the polygon sides in memory, so you don't have to recalculate these every time. This will save you two floating point multiplications and three floating point subtractions on every test in exchange for storing three floating point values per polygon side in memory. It's a typical memory vs computation time trade off.
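As a sketch of that caching idea (hypothetical struct and names, using the same A, B, C formulas as the code above): compute the standard form of every side once up front and reuse it for every tested point.
typedef struct { float a, b, c; } SideEq;   /* A*x + B*y + C = 0 */

void precomputeSides(const float *px, const float *py, int n, SideEq *out)
{
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;                        /* next vertex, wrapping around */
        out[i].a = py[j] - py[i];                   /* A = y2 - y1 */
        out[i].b = px[i] - px[j];                   /* B = x1 - x2 */
        out[i].c = px[j] * py[i] - px[i] * py[j];   /* C = x2*y1 - x1*y2 */
    }
}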
Last but not least: If you can use 3D hardware to solve the problem, there is an interesting alternative. Just let the GPU do all the work for you. Create a painting surface that is off screen. Fill it completely with the color black. Now let OpenGL or Direct3D paint your polygon (or even all of your polygons if you just want to test whether the point is within any of them, but you don't care which one) and fill the polygon(s) with a different color, e.g. white. To check if a point is within the polygon, get the color of this point from the drawing surface. This is just an O(1) memory fetch.
Of course this method is only usable if your drawing surface doesn't have to be huge. If it cannot fit into the GPU memory, this method is slower than doing it on the CPU. If it would have to be huge and your GPU supports modern shaders, you can still use the GPU by implementing the ray casting shown above as a GPU shader, which absolutely is possible. For a larger number of polygons or a large number of points to test, this will pay off; consider that some GPUs will be able to test 64 to 256 points in parallel. Note however that transferring data from CPU to GPU and back is always expensive, so for just testing a couple of points against a couple of simple polygons, where either the points or the polygons are dynamic and will change frequently, a GPU approach will rarely pay off.
I think the following piece of code is the best solution (taken from here):
int pnpoly(int nvert, float *vertx, float *verty, float testx, float testy)
{
int i, j, c = 0;
for (i = 0, j = nvert-1; i < nvert; j = i++) {
if ( ((verty[i]>testy) != (verty[j]>testy)) &&
(testx < (vertx[j]-vertx[i]) * (testy-verty[i]) / (verty[j]-verty[i]) + vertx[i]) )
c = !c;
}
return c;
}
Arguments
nvert: Number of vertices in the polygon. Whether to repeat the first vertex at the end has been discussed in the article referred to above.
vertx, verty: Arrays containing the x- and y-coordinates of the polygon's vertices.
testx, testy: X- and y-coordinate of the test point.
It's both short and efficient and works both for convex and concave polygons. As suggested before, you should check the bounding rectangle first and treat polygon holes separately.
The idea behind this is pretty simple. The author describes it as follows:
I run a semi-infinite ray horizontally (increasing x, fixed y) out from the test point, and count how many edges it crosses. At each crossing, the ray switches between inside and outside. This is called the Jordan curve theorem.
The variable c is switching from 0 to 1 and 1 to 0 each time the horizontal ray crosses any edge. So basically it's keeping track of whether the number of edges crossed are even or odd. 0 means even and 1 means odd.
Here is a C# version of the answer given by nirg, which comes from this RPI professor. Note that use of the code from that RPI source requires attribution.
A bounding box check has been added at the top. However, as James Brown points out, the main code is almost as fast as the bounding box check itself, so the bounding box check can actually slow the overall operation, in the case that most of the points you are checking are inside the bounding box. So you could leave the bounding box check out, or an alternative would be to precompute the bounding boxes of your polygons if they don't change shape too often.
public bool IsPointInPolygon( Point p, Point[] polygon )
{
double minX = polygon[ 0 ].X;
double maxX = polygon[ 0 ].X;
double minY = polygon[ 0 ].Y;
double maxY = polygon[ 0 ].Y;
for ( int i = 1 ; i < polygon.Length ; i++ )
{
Point q = polygon[ i ];
minX = Math.Min( q.X, minX );
maxX = Math.Max( q.X, maxX );
minY = Math.Min( q.Y, minY );
maxY = Math.Max( q.Y, maxY );
}
if ( p.X < minX || p.X > maxX || p.Y < minY || p.Y > maxY )
{
return false;
}
// https://wrf.ecse.rpi.edu/Research/Short_Notes/pnpoly.html
bool inside = false;
for ( int i = 0, j = polygon.Length - 1 ; i < polygon.Length ; j = i++ )
{
if ( ( polygon[ i ].Y > p.Y ) != ( polygon[ j ].Y > p.Y ) &&
p.X < ( polygon[ j ].X - polygon[ i ].X ) * ( p.Y - polygon[ i ].Y ) / ( polygon[ j ].Y - polygon[ i ].Y ) + polygon[ i ].X )
{
inside = !inside;
}
}
return inside;
}
Here is a JavaScript variant of the answer by M. Katz based on Nirg's approach:
function pointIsInPoly(p, polygon) {
var isInside = false;
var minX = polygon[0].x, maxX = polygon[0].x;
var minY = polygon[0].y, maxY = polygon[0].y;
for (var n = 1; n < polygon.length; n++) {
var q = polygon[n];
minX = Math.min(q.x, minX);
maxX = Math.max(q.x, maxX);
minY = Math.min(q.y, minY);
maxY = Math.max(q.y, maxY);
}
if (p.x < minX || p.x > maxX || p.y < minY || p.y > maxY) {
return false;
}
var i = 0, j = polygon.length - 1;
for (i, j; i < polygon.length; j = i++) {
if ( (polygon[i].y > p.y) != (polygon[j].y > p.y) &&
p.x < (polygon[j].x - polygon[i].x) * (p.y - polygon[i].y) / (polygon[j].y - polygon[i].y) + polygon[i].x ) {
isInside = !isInside;
}
}
return isInside;
}
Compute the oriented sum of angles between the point p and each of the polygon apices. If the total oriented angle is 360 degrees, the point is inside. If the total is 0, the point is outside.
I like this method better because it is more robust and less dependent on numerical precision.
Methods that compute evenness of number of intersections are limited because you can 'hit' an apex during the computation of the number of intersections.
EDIT: By The Way, this method works with concave and convex polygons.
EDIT: I recently found a whole Wikipedia article on the topic.
This question is so interesting. I have another workable idea, different from the other answers to this post. The idea is to use the sum of angles to decide whether the target is inside or outside, better known as the winding number.
Let x be the target point. Let the array [0, 1, .... n] be all the points of the area. Connect the target point with every border point with a line. If the target point is inside this area, the sum of all angles will be 360 degrees. If not, the angles will sum to less than 360.
Refer to this image to get a basic understanding of the idea:
My algorithm assumes clockwise is the positive direction. Here is a potential input:
[[-122.402015, 48.225216], [-117.032049, 48.999931], [-116.919132, 45.995175], [-124.079107, 46.267259], [-124.717175, 48.377557], [-122.92315, 47.047963], [-122.402015, 48.225216]]
The following is the python code that implements the idea:
def isInside(self, border, target):
    degree = 0
    for i in range(len(border) - 1):
        a = border[i]
        b = border[i + 1]

        # calculate distance of vector
        # (getDistance is assumed to be a Euclidean distance helper)
        A = getDistance(a[0], a[1], b[0], b[1])
        B = getDistance(target[0], target[1], a[0], a[1])
        C = getDistance(target[0], target[1], b[0], b[1])

        # calculate direction of vector
        ta_x = a[0] - target[0]
        ta_y = a[1] - target[1]
        tb_x = b[0] - target[0]
        tb_y = b[1] - target[1]

        cross = tb_y * ta_x - tb_x * ta_y
        clockwise = cross < 0

        # calculate sum of angles (requires import math)
        if(clockwise):
            degree = degree + math.degrees(math.acos((B * B + C * C - A * A) / (2.0 * B * C)))
        else:
            degree = degree - math.degrees(math.acos((B * B + C * C - A * A) / (2.0 * B * C)))

    if(abs(round(degree) - 360) <= 3):
        return True
    return False
The Eric Haines article cited by bobobobo is really excellent. Particularly interesting are the tables comparing performance of the algorithms; the angle summation method is really bad compared to the others. Also interesting is that optimisations like using a lookup grid to further subdivide the polygon into "in" and "out" sectors can make the test incredibly fast even on polygons with > 1000 sides.
Anyway, it's early days but my vote goes to the "crossings" method, which is pretty much what Mecki describes, I think. However I found it most succinctly described and codified by David Bourke. I love that there is no real trigonometry required, and it works for convex and concave polygons, and it performs reasonably well as the number of sides increases.
By the way, here's one of the performance tables from the Eric Haines' article for interest, testing on random polygons.
                          number of edges per polygon
                              3      4     10    100   1000
MacMartin                   2.9    3.2    5.9   50.6    485
Crossings                   3.1    3.4    6.8   60.0    624
Triangle Fan+edge sort      1.1    1.8    6.5   77.6    787
Triangle Fan                1.2    2.1    7.3   85.4    865
Barycentric                 2.1    3.8   13.8  160.7   1665
Angle Summation            56.2   70.4  153.6 1403.8  14693
Grid (100x100)              1.5    1.5    1.6    2.1    9.8
Grid (20x20)                1.7    1.7    1.9    5.7   42.2
Bins (100)                  1.8    1.9    2.7   15.1    117
Bins (20)                   2.1    2.2    3.7   26.3    278
Really like the solution posted by Nirg and edited by bobobobo. I just made it javascript friendly and a little more legible for my use:
function insidePoly(poly, pointx, pointy) {
var i, j;
var inside = false;
for (i = 0, j = poly.length - 1; i < poly.length; j = i++) {
if(((poly[i].y > pointy) != (poly[j].y > pointy)) && (pointx < (poly[j].x-poly[i].x) * (pointy-poly[i].y) / (poly[j].y-poly[i].y) + poly[i].x) ) inside = !inside;
}
return inside;
}
Swift version of the answer by nirg:
extension CGPoint {
func isInsidePolygon(vertices: [CGPoint]) -> Bool {
guard !vertices.isEmpty else { return false }
var j = vertices.last!, c = false
for i in vertices {
let a = (i.y > y) != (j.y > y)
let b = (x < (j.x - i.x) * (y - i.y) / (j.y - i.y) + i.x)
if a && b { c = !c }
j = i
}
return c
}
}
Most of the answers in this question are not handling all corner cases well. Some subtle corner cases like below:
This is a javascript version with all corner cases well handled.
/** Get relationship between a point and a polygon using ray-casting algorithm
 * @param {{x:number, y:number}} P: point to check
 * @param {{x:number, y:number}[]} polygon: the polygon
 * @returns -1: outside, 0: on edge, 1: inside
 */
function relationPP(P, polygon) {
const between = (p, a, b) => p >= a && p <= b || p <= a && p >= b
let inside = false
for (let i = polygon.length-1, j = 0; j < polygon.length; i = j, j++) {
const A = polygon[i]
const B = polygon[j]
// corner cases
if (P.x == A.x && P.y == A.y || P.x == B.x && P.y == B.y) return 0
if (A.y == B.y && P.y == A.y && between(P.x, A.x, B.x)) return 0
if (between(P.y, A.y, B.y)) { // if P inside the vertical range
// filter out "ray pass vertex" problem by treating the line a little lower
if (P.y == A.y && B.y >= A.y || P.y == B.y && A.y >= B.y) continue
// calc cross product `PA X PB`, P lays on left side of AB if c > 0
const c = (A.x - P.x) * (B.y - P.y) - (B.x - P.x) * (A.y - P.y)
if (c == 0) return 0
if ((A.y < B.y) == (c > 0)) inside = !inside
}
}
return inside? 1 : -1
}
I did some work on this back when I was a researcher under Michael Stonebraker - you know, the professor who came up with Ingres, PostgreSQL, etc.
We realized that the fastest way was to first do a bounding box because it's SUPER fast. If it's outside the bounding box, it's outside. Otherwise, you do the harder work...
If you want a great algorithm, look to the open source project PostgreSQL source code for the geo work...
I want to point out that we never got any insight into right- vs. left-handedness (also expressible as an "inside" vs. "outside" problem)...
UPDATE
BKB's link provided a good number of reasonable algorithms. I was working on Earth Science problems and therefore needed a solution that works in latitude/longitude, and it has the peculiar problem of handedness - is the area inside the smaller area or the bigger area? The answer is that the "direction" of the vertices matters - it's either left-handed or right-handed, and in this way you can indicate either area as "inside" any given polygon. As such, my work used solution three enumerated on that page.
In addition, my work used separate functions for "on the line" tests.
...Since someone asked: we figured out that bounding box tests were best when the number of vertices went beyond some number - do a very quick test before doing the longer test if necessary... A bounding box is created by simply taking the largest x, smallest x, largest y and smallest y and putting them together to make four points of a box...
Another tip for those that follow: we did all our more sophisticated and "light-dimming" computing in a grid space all in positive points on a plane and then re-projected back into "real" longitude/latitude, thus avoiding possible errors of wrapping around when one crossed line 180 of longitude and when handling polar regions. Worked great!
The trivial solution would be to divide the polygon into triangles and hit-test the triangles, as explained here
If your polygon is CONVEX there might be a better approach, though. Look at the polygon as a collection of infinite lines, each line dividing space into two. For every point it's easy to say if it's on the one side or the other side of the line. If a point is on the same side of all the lines, then it is inside the polygon.
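A minimal C sketch of that convex-only idea (hypothetical helper; the vertices are assumed to be stored in one consistent winding order): the point is inside exactly when the cross products against all edges share a sign.
#include <stdbool.h>

bool insideConvex(const float *vx, const float *vy, int n, float px, float py)
{
    bool hasPos = false, hasNeg = false;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        float cross = (vx[j] - vx[i]) * (py - vy[i]) - (vy[j] - vy[i]) * (px - vx[i]);
        if (cross > 0) hasPos = true;
        if (cross < 0) hasNeg = true;
        if (hasPos && hasNeg) return false;   /* point lies on both sides: outside */
    }
    return true;                              /* boundary points count as inside here */
}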
David Segond's answer is pretty much the standard general answer, and Richard T's is the most common optimization, though there are some others. Other strong optimizations are based on less general solutions. For example, if you are going to check the same polygon with lots of points, triangulating the polygon can speed things up hugely, as there are a number of very fast TIN searching algorithms. Another is if the polygon and points are on a limited plane at low resolution, say a screen display: you can paint the polygon onto a memory-mapped display buffer in a given colour, and check the colour of a given pixel to see if it lies in the polygon.
Like many optimizations, these are based on specific rather than general cases, and yield benefits based on amortized time rather than single usage.
Working in this field, I found Joseph O'Rourke's 'Computational Geometry in C' ISBN 0-521-44034-3 to be a great help.
Java Version:
public class Geocode {
private float latitude;
private float longitude;
public Geocode() {
}
public Geocode(float latitude, float longitude) {
this.latitude = latitude;
this.longitude = longitude;
}
public float getLatitude() {
return latitude;
}
public void setLatitude(float latitude) {
this.latitude = latitude;
}
public float getLongitude() {
return longitude;
}
public void setLongitude(float longitude) {
this.longitude = longitude;
}
}
public class GeoPolygon {
private ArrayList<Geocode> points;
public GeoPolygon() {
this.points = new ArrayList<Geocode>();
}
public GeoPolygon(ArrayList<Geocode> points) {
this.points = points;
}
public GeoPolygon add(Geocode geo) {
points.add(geo);
return this;
}
public boolean inside(Geocode geo) {
int i, j;
boolean c = false;
for (i = 0, j = points.size() - 1; i < points.size(); j = i++) {
if (((points.get(i).getLongitude() > geo.getLongitude()) != (points.get(j).getLongitude() > geo.getLongitude())) &&
(geo.getLatitude() < (points.get(j).getLatitude() - points.get(i).getLatitude()) * (geo.getLongitude() - points.get(i).getLongitude()) / (points.get(j).getLongitude() - points.get(i).getLongitude()) + points.get(i).getLatitude()))
c = !c;
}
return c;
}
}
I realize this is old, but here is a ray casting algorithm implemented in Cocoa, in case anyone is interested. Not sure it is the most efficient way to do things, but it may help someone out.
- (BOOL)shape:(NSBezierPath *)path containsPoint:(NSPoint)point
{
NSBezierPath *currentPath = [path bezierPathByFlatteningPath];
BOOL result;
float aggregateX = 0; //I use these to calculate the centroid of the shape
float aggregateY = 0;
NSPoint firstPoint[1];
[currentPath elementAtIndex:0 associatedPoints:firstPoint];
float olderX = firstPoint[0].x;
float olderY = firstPoint[0].y;
NSPoint interPoint;
int noOfIntersections = 0;
for (int n = 0; n < [currentPath elementCount]; n++) {
NSPoint points[1];
[currentPath elementAtIndex:n associatedPoints:points];
aggregateX += points[0].x;
aggregateY += points[0].y;
}
for (int n = 0; n < [currentPath elementCount]; n++) {
NSPoint points[1];
[currentPath elementAtIndex:n associatedPoints:points];
//line equations in Ax + By = C form
float _A_FOO = (aggregateY/[currentPath elementCount]) - point.y;
float _B_FOO = point.x - (aggregateX/[currentPath elementCount]);
float _C_FOO = (_A_FOO * point.x) + (_B_FOO * point.y);
float _A_BAR = olderY - points[0].y;
float _B_BAR = points[0].x - olderX;
float _C_BAR = (_A_BAR * olderX) + (_B_BAR * olderY);
float det = (_A_FOO * _B_BAR) - (_A_BAR * _B_FOO);
if (det != 0) {
//intersection points with the edges
float xIntersectionPoint = ((_B_BAR * _C_FOO) - (_B_FOO * _C_BAR)) / det;
float yIntersectionPoint = ((_A_FOO * _C_BAR) - (_A_BAR * _C_FOO)) / det;
interPoint = NSMakePoint(xIntersectionPoint, yIntersectionPoint);
if (olderX <= points[0].x) {
//doesn't matter in which direction the ray goes, so I send it right-ward.
if ((interPoint.x >= olderX && interPoint.x <= points[0].x) && (interPoint.x > point.x)) {
noOfIntersections++;
}
} else {
if ((interPoint.x >= points[0].x && interPoint.x <= olderX) && (interPoint.x > point.x)) {
noOfIntersections++;
}
}
}
olderX = points[0].x;
olderY = points[0].y;
}
if (noOfIntersections % 2 == 0) {
result = FALSE;
} else {
result = TRUE;
}
return result;
}
Obj-C version of nirg's answer with sample method for testing points. Nirg's answer worked well for me.
- (BOOL)isPointInPolygon:(NSArray *)vertices point:(CGPoint)test {
NSUInteger nvert = [vertices count];
NSInteger i, j, c = 0;
CGPoint verti, vertj;
for (i = 0, j = nvert-1; i < nvert; j = i++) {
verti = [(NSValue *)[vertices objectAtIndex:i] CGPointValue];
vertj = [(NSValue *)[vertices objectAtIndex:j] CGPointValue];
if (( (verti.y > test.y) != (vertj.y > test.y) ) &&
( test.x < ( vertj.x - verti.x ) * ( test.y - verti.y ) / ( vertj.y - verti.y ) + verti.x) )
c = !c;
}
return (c ? YES : NO);
}
- (void)testPoint {
NSArray *polygonVertices = [NSArray arrayWithObjects:
[NSValue valueWithCGPoint:CGPointMake(13.5, 41.5)],
[NSValue valueWithCGPoint:CGPointMake(42.5, 56.5)],
[NSValue valueWithCGPoint:CGPointMake(39.5, 69.5)],
[NSValue valueWithCGPoint:CGPointMake(42.5, 84.5)],
[NSValue valueWithCGPoint:CGPointMake(13.5, 100.0)],
[NSValue valueWithCGPoint:CGPointMake(6.0, 70.5)],
nil
];
CGPoint tappedPoint = CGPointMake(23.0, 70.0);
if ([self isPointInPolygon:polygonVertices point:tappedPoint]) {
NSLog(@"YES");
} else {
NSLog(@"NO");
}
}
There is nothing more beautiful than an inductive definition of a problem. For the sake of completeness, here you have a version in Prolog which might also clarify the thoughts behind ray casting:
Based on the simulation of simplicity algorithm in http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
Some helper predicates:
exor(A,B):- \+A,B;A,\+B.
in_range(Coordinate,CA,CB) :- exor((CA>Coordinate),(CB>Coordinate)).
inside(false).
inside(_,[_|[]]).
inside(X:Y, [X1:Y1,X2:Y2|R]) :- in_range(Y,Y1,Y2), X > ( ((X2-X1)*(Y-Y1))/(Y2-Y1) + X1),toggle_ray, inside(X:Y, [X2:Y2|R]); inside(X:Y, [X2:Y2|R]).
get_line(_,_,[]).
get_line([XA:YA,XB:YB],[X1:Y1,X2:Y2|R]):- [XA:YA,XB:YB]=[X1:Y1,X2:Y2]; get_line([XA:YA,XB:YB],[X2:Y2|R]).
The equation of a line given 2 points A and B (Line(A,B)) is:
             (YB-YA)
    Y - YA = ------- * (X - XA)
             (XB-XA)
It is important that the direction of rotation for the line is
set to clockwise for boundaries and anti-clockwise for holes.
We are going to check whether the point (X,Y), i.e. the tested point, is in the left
half-plane of our line (it is a matter of taste, it could also be
the right side, but then the direction of the boundary lines has to be changed in
that case), this is to project the ray from the point to the right (or left)
and acknowledge the intersection with the line. We have chosen to project
the ray in the horizontal direction (again it is a matter of taste,
it could also be done in vertical with similar restrictions), so we have:
        (XB-XA)
    X < ------- * (Y - YA) + XA
        (YB-YA)
Now we need to know if the point is at the left (or right) side of
the line segment only, not the entire plane, so we need to
restrict the search only to this segment, but this is easy since
to be inside the segment only one point in the line can be higher
than Y in the vertical axis. As this is a stronger restriction it
needs to be the first to check, so we take first only those lines
meeting this requirement and then check its position. By the Jordan
Curve theorem any ray projected to a polygon must intersect an
even number of lines. So we are done; we will throw the ray to the
right and then every time it intersects a line, toggle its state.
However, in our implementation we are going to check the length of
the bag of solutions meeting the given restrictions and decide the
insideness upon it. This has to be done for each line in the polygon.
is_left_half_plane(_,[],[],_).
is_left_half_plane(X:Y,[XA:YA,XB:YB], [[X1:Y1,X2:Y2]|R], Test) :- [XA:YA, XB:YB] = [X1:Y1, X2:Y2], call(Test, X , (((XB - XA) * (Y - YA)) / (YB - YA) + XA));
is_left_half_plane(X:Y, [XA:YA, XB:YB], R, Test).
in_y_range_at_poly(Y,[XA:YA,XB:YB],Polygon) :- get_line([XA:YA,XB:YB],Polygon), in_range(Y,YA,YB).
all_in_range(Coordinate,Polygon,Lines) :- aggregate(bag(Line), in_y_range_at_poly(Coordinate,Line,Polygon), Lines).
traverses_ray(X:Y, Lines, Count) :- aggregate(bag(Line), is_left_half_plane(X:Y, Line, Lines, <), IntersectingLines), length(IntersectingLines, Count).
% This is the entry point predicate
inside_poly(X:Y,Polygon,Answer) :- all_in_range(Y,Polygon,Lines), traverses_ray(X:Y, Lines, Count), (1 is mod(Count,2)->Answer=inside;Answer=outside).
I've made a Python implementation of nirg's c++ code:
Inputs
bounding_points: nodes that make up the polygon.
bounding_box_positions: candidate points to filter (in my implementation, created from the bounding box).
(The inputs are lists of tuples in the format: [(xcord, ycord), ...])
Returns
All the points that are inside the polygon.
def polygon_ray_casting(self, bounding_points, bounding_box_positions):
    # Arrays containing the x- and y-coordinates of the polygon's vertices.
    vertx = [point[0] for point in bounding_points]
    verty = [point[1] for point in bounding_points]

    # Number of vertices in the polygon
    nvert = len(bounding_points)

    # Points that are inside
    points_inside = []

    # For every candidate position within the bounding box
    for idx, pos in enumerate(bounding_box_positions):
        testx, testy = (pos[0], pos[1])
        c = 0
        for i in range(0, nvert):
            j = i - 1 if i != 0 else nvert - 1
            if( ((verty[i] > testy ) != (verty[j] > testy)) and
                (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) / (verty[j] - verty[i]) + vertx[i]) ):
                c += 1

        # If odd, that means that we are inside the polygon
        if c % 2 == 1:
            points_inside.append(pos)

    return points_inside
Again, the idea is taken from here
Here is a C# version of nirg's answer; I'll just share the code. It may save someone some time.
public static bool IsPointInPolygon(IList<Point> polygon, Point testPoint) {
bool result = false;
int j = polygon.Count() - 1;
for (int i = 0; i < polygon.Count(); i++) {
if (polygon[i].Y < testPoint.Y && polygon[j].Y >= testPoint.Y || polygon[j].Y < testPoint.Y && polygon[i].Y >= testPoint.Y) {
if (polygon[i].X + (testPoint.Y - polygon[i].Y) / (polygon[j].Y - polygon[i].Y) * (polygon[j].X - polygon[i].X) < testPoint.X) {
result = !result;
}
}
j = i;
}
return result;
}
VBA VERSION:
Note: Remember that if your polygon is an area within a map, Latitude/Longitude are Y/X values as opposed to X/Y (Latitude = Y, Longitude = X), due to what I understand are historical implications from way back when longitude was not a measurement.
CLASS MODULE: CPoint
Private pXValue As Double
Private pYValue As Double
'''''X Value Property'''''
Public Property Get X() As Double
X = pXValue
End Property
Public Property Let X(Value As Double)
pXValue = Value
End Property
'''''Y Value Property'''''
Public Property Get Y() As Double
Y = pYValue
End Property
Public Property Let Y(Value As Double)
pYValue = Value
End Property
MODULE:
Public Function isPointInPolygon(p As CPoint, polygon() As CPoint) As Boolean
Dim i As Integer
Dim j As Integer
Dim q As Object
Dim minX As Double
Dim maxX As Double
Dim minY As Double
Dim maxY As Double
minX = polygon(0).X
maxX = polygon(0).X
minY = polygon(0).Y
maxY = polygon(0).Y
For i = 1 To UBound(polygon)
Set q = polygon(i)
minX = vbMin(q.X, minX)
maxX = vbMax(q.X, maxX)
minY = vbMin(q.Y, minY)
maxY = vbMax(q.Y, maxY)
Next i
If p.X < minX Or p.X > maxX Or p.Y < minY Or p.Y > maxY Then
isPointInPolygon = False
Exit Function
End If
' SOURCE: http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html
isPointInPolygon = False
i = 0
j = UBound(polygon)
Do While i < UBound(polygon) + 1
If (polygon(i).Y > p.Y) Then
If (polygon(j).Y < p.Y) Then
If p.X < (polygon(j).X - polygon(i).X) * (p.Y - polygon(i).Y) / (polygon(j).Y - polygon(i).Y) + polygon(i).X Then
isPointInPolygon = True
Exit Function
End If
End If
ElseIf (polygon(i).Y < p.Y) Then
If (polygon(j).Y > p.Y) Then
If p.X < (polygon(j).X - polygon(i).X) * (p.Y - polygon(i).Y) / (polygon(j).Y - polygon(i).Y) + polygon(i).X Then
isPointInPolygon = True
Exit Function
End If
End If
End If
j = i
i = i + 1
Loop
End Function
Function vbMax(n1, n2) As Double
vbMax = IIf(n1 > n2, n1, n2)
End Function
Function vbMin(n1, n2) As Double
vbMin = IIf(n1 > n2, n2, n1)
End Function
Sub TestPointInPolygon()
Dim i As Integer
Dim InPolygon As Boolean
' MARKER Object
Dim p As CPoint
Set p = New CPoint
p.X = <ENTER X VALUE HERE>
p.Y = <ENTER Y VALUE HERE>
' POLYGON OBJECT
Dim polygon() As CPoint
ReDim polygon(<ENTER VALUE HERE>) 'Amount of vertices in polygon - 1
For i = 0 To <ENTER VALUE HERE> 'Same value as above
Set polygon(i) = New CPoint
polygon(i).X = <ASSIGN X VALUE HERE> 'Source a list of values that can be looped through
polygon(i).Y = <ASSIGN Y VALUE HERE> 'Source a list of values that can be looped through
Next i
InPolygon = isPointInPolygon(p, polygon)
MsgBox InPolygon
End Sub
.Net port:
static void Main(string[] args)
{
Console.Write("Hola");
List<double> vertx = new List<double>();
List<double> verty = new List<double>();
int i, j, c = 0;
vertx.Add(1);
vertx.Add(2);
vertx.Add(1);
vertx.Add(4);
vertx.Add(4);
vertx.Add(1);
verty.Add(1);
verty.Add(2);
verty.Add(4);
verty.Add(4);
verty.Add(1);
verty.Add(1);
int nvert = 6; // Number of vertices of the polygon
double testx = 2;
double testy = 5;
for (i = 0, j = nvert - 1; i < nvert; j = i++)
{
if (((verty[i] > testy) != (verty[j] > testy)) &&
(testx < (vertx[j] - vertx[i]) * (testy - verty[i]) / (verty[j] - verty[i]) + vertx[i]))
c = 1 - c; // toggle, as in the original pnpoly (c = !c)
}
}
Surprised nobody brought this up earlier, but for the pragmatists requiring a database: MongoDB has excellent support for Geo queries including this one.
What you are looking for is:
db.neighborhoods.findOne({ geometry: { $geoIntersects: { $geometry: {
type: "Point", coordinates: [ "longitude", "latitude" ] } } }
})
Neighborhoods is the collection that stores one or more polygons in standard GeoJSON format. If the query returns null, the point does not intersect any polygon; otherwise it does.
Very well documented here:
https://docs.mongodb.com/manual/tutorial/geospatial-tutorial/
The performance for more than 6,000 points classified against an irregular grid of 330 polygons was less than one minute with no optimization at all, including the time to update documents with their respective polygon.
Here's a point-in-polygon test in C that isn't using ray casting. It can also work for overlapping areas (self-intersections); see the use_holes argument.
/* math lib (defined below) */
static float dot_v2v2(const float a[2], const float b[2]);
static float angle_signed_v2v2(const float v1[2], const float v2[2]);
static void copy_v2_v2(float r[2], const float a[2]);
/* intersection function */
bool isect_point_poly_v2(const float pt[2], const float verts[][2], const unsigned int nr,
const bool use_holes)
{
/* we do the angle rule, define that all added angles should be about zero or (2 * PI) */
float angletot = 0.0;
float fp1[2], fp2[2];
unsigned int i;
const float *p1, *p2;
p1 = verts[nr - 1];
/* first vector */
fp1[0] = p1[0] - pt[0];
fp1[1] = p1[1] - pt[1];
for (i = 0; i < nr; i++) {
p2 = verts[i];
/* second vector */
fp2[0] = p2[0] - pt[0];
fp2[1] = p2[1] - pt[1];
/* dot and angle and cross */
angletot += angle_signed_v2v2(fp1, fp2);
/* circulate */
copy_v2_v2(fp1, fp2);
p1 = p2;
}
angletot = fabsf(angletot);
if (use_holes) {
const float nested = floorf((angletot / (float)(M_PI * 2.0)) + 0.00001f);
angletot -= nested * (float)(M_PI * 2.0);
return (angletot > 4.0f) != ((int)nested % 2);
}
else {
return (angletot > 4.0f);
}
}
/* math lib */
static float dot_v2v2(const float a[2], const float b[2])
{
return a[0] * b[0] + a[1] * b[1];
}
static float angle_signed_v2v2(const float v1[2], const float v2[2])
{
const float perp_dot = (v1[1] * v2[0]) - (v1[0] * v2[1]);
return atan2f(perp_dot, dot_v2v2(v1, v2));
}
static void copy_v2_v2(float r[2], const float a[2])
{
r[0] = a[0];
r[1] = a[1];
}
Note: this is one of the less optimal methods since it includes a lot of calls to atan2f, but it may be of interest to developers reading this thread (in my tests it's ~23x slower than using the line intersection method).
If you're using Google Map SDK and want to check if a point is inside a polygon, you can try to use GMSGeometryContainsLocation. It works great!! Here is how that works,
if GMSGeometryContainsLocation(point, polygon, true) {
print("Inside this polygon.")
} else {
print("outside this polygon")
}
Here is the reference: https://developers.google.com/maps/documentation/ios-sdk/reference/group___geometry_utils#gaba958d3776d49213404af249419d0ffd
This is a presumably slightly less optimized version of the C code from here which was sourced from this page.
My C++ version uses a std::vector<std::pair<double, double>> and two doubles as an x and y. The logic should be exactly the same as the original C code, but I find mine easier to read. I can't speak for the performance.
bool point_in_poly(std::vector<std::pair<double, double>>& verts, double point_x, double point_y)
{
bool in_poly = false;
auto num_verts = verts.size();
for (int i = 0, j = num_verts - 1; i < num_verts; j = i++) {
double x1 = verts[i].first;
double y1 = verts[i].second;
double x2 = verts[j].first;
double y2 = verts[j].second;
if (((y1 > point_y) != (y2 > point_y)) &&
(point_x < (x2 - x1) * (point_y - y1) / (y2 - y1) + x1))
in_poly = !in_poly;
}
return in_poly;
}
The original C code is
int pnpoly(int nvert, float *vertx, float *verty, float testx, float testy)
{
int i, j, c = 0;
for (i = 0, j = nvert-1; i < nvert; j = i++) {
if ( ((verty[i]>testy) != (verty[j]>testy)) &&
(testx < (vertx[j]-vertx[i]) * (testy-verty[i]) / (verty[j]-verty[i]) + vertx[i]) )
c = !c;
}
return c;
}
Yet another NumPy implementation, which I believe is the most concise one out of all the answers so far.
For example, let's say we have a polygon with hollows that looks like this:
The 2D coordinates for the vertices of the large polygon are
[[139, 483], [227, 792], [482, 849], [523, 670], [352, 330]]
The coordinates for the vertices of the square hollow are
[[248, 518], [336, 510], [341, 614], [250, 620]]
The coordinates for the vertices of the triangle hollow are
[[416, 531], [505, 517], [495, 616]]
Say we want to test two points [296, 557] and [422, 730] if they are within the red area (excluding the edges). If we locate the two points, it will look like this:
Obviously, [296, 557] is not inside the red area, whereas [422, 730] is.
My solution is based on the winding number algorithm. Below is my 4-line python code using only numpy:
def detect(points, *polygons):
    import numpy as np
    endpoint1 = np.r_[tuple(np.roll(p, 1, 0) for p in polygons)][:, None] - points
    endpoint2 = np.r_[polygons][:, None] - points
    p1, p2 = np.cross(endpoint1, endpoint2), np.einsum('...i,...i', endpoint1, endpoint2)
    return ~((p1.sum(0) < 0) ^ (abs(np.arctan2(p1, p2).sum(0)) > np.pi) | ((p1 == 0) & (p2 <= 0)).any(0))
To test the implementation:
points = [[296, 557], [422, 730]]
polygon1 = [[139, 483], [227, 792], [482, 849], [523, 670], [352, 330]]
polygon2 = [[248, 518], [336, 510], [341, 614], [250, 620]]
polygon3 = [[416, 531], [505, 517], [495, 616]]
print(detect(points, polygon1, polygon2, polygon3))
Output:
[False True]
For detecting a hit on a polygon we need to test two things:
Whether the point is inside the polygon area (can be accomplished by the ray-casting algorithm).
Whether the point is on the polygon border (can be accomplished by the same algorithm that is used for point detection on a polyline (line); a sketch of such a test follows below).
To deal with the following special cases in the ray-casting algorithm:
The ray overlaps one of the polygon's sides.
The point is inside of the polygon and the ray passes through a vertex of the polygon.
The point is outside of the polygon and the ray just touches one of the polygon's angles.
Check Determining Whether A Point Is Inside A Complex Polygon. The article provides an easy way to resolve them, so no special treatment is required for the above cases.
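For the second test (point on the polygon border), here is a small C sketch of a point-on-segment check (hypothetical helper with an assumed epsilon; the tolerance you actually need depends on your coordinate scale): the point must be collinear with the segment and must project onto it between its endpoints.
#include <math.h>
#include <stdbool.h>

bool pointOnSegment(float px, float py, float ax, float ay, float bx, float by)
{
    const float eps = 1e-6f;                                  /* tolerance - assumption */
    float cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    if (fabsf(cross) > eps) return false;                     /* not collinear */
    float dot = (px - ax) * (bx - ax) + (py - ay) * (by - ay);
    if (dot < 0.0f) return false;                             /* before endpoint A */
    float len2 = (bx - ax) * (bx - ax) + (by - ay) * (by - ay);
    return dot <= len2;                                       /* and not past endpoint B */
}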
You can do this by checking if the area formed by connecting the desired point to the vertices of your polygon matches the area of the polygon itself.
Or you could check if the sum of the inner angles between your check point and each pair of two consecutive polygon vertices is 360, but I have the feeling that the first option is quicker because it doesn't involve divisions or calculations of inverse trigonometric functions.
I don't know what happens if your polygon has a hole inside it but it seems to me that the main idea can be adapted to this situation
You can as well post the question in a math community. I bet they have one million ways of doing that
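A minimal C sketch of the first (area-based) idea, under two assumptions that are mine rather than the answer's: the polygon is convex, and a small tolerance is acceptable for the floating point comparison.
#include <math.h>
#include <stdbool.h>

static double triArea(double ax, double ay, double bx, double by, double cx, double cy)
{
    return 0.5 * fabs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

/* Inside iff the fan of triangles (P, v_i, v_i+1) adds up to the polygon's area. */
bool insideByArea(const double *vx, const double *vy, int n, double px, double py)
{
    double poly = 0.0, fan = 0.0;
    for (int i = 0; i < n; i++) {
        int j = (i + 1) % n;
        poly += 0.5 * (vx[i] * vy[j] - vx[j] * vy[i]);          /* shoelace, signed */
        fan  += triArea(px, py, vx[i], vy[i], vx[j], vy[j]);
    }
    return fabs(fan - fabs(poly)) < 1e-9;                       /* tolerance - assumption */
}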
If you are looking for a JavaScript library, there's a JavaScript Google Maps v3 extension for the Polygon class to detect whether or not a point resides within it.
var polygon = new google.maps.Polygon([], "#000000", 1, 1, "#336699", 0.3);
var isWithinPolygon = polygon.containsLatLng(40, -90);
Google Extention Github
