How can I check whether a point is inside the circumcircle of 3 points? - algorithm

Is there any easy solution? Or does anybody have an example of an implementation?
Thanks, Jonas

Let's call
a, b, c our three points,
C the circumcircle of (a, b, c),
and d another point.
A fast way to determine if d is in C is to compute this determinant:
      | ax-dx, ay-dy, (ax-dx)² + (ay-dy)² |
det = | bx-dx, by-dy, (bx-dx)² + (by-dy)² |
      | cx-dx, cy-dy, (cx-dx)² + (cy-dy)² |
If a, b, c are in counterclockwise order, then:
if det = 0, then d is on C
if det > 0, then d is inside C
if det < 0, then d is outside C
Here is a JavaScript function that does just that:
function inCircle(ax, ay, bx, by, cx, cy, dx, dy) {
    let ax_ = ax - dx;
    let ay_ = ay - dy;
    let bx_ = bx - dx;
    let by_ = by - dy;
    let cx_ = cx - dx;
    let cy_ = cy - dy;
    return (
        (ax_ * ax_ + ay_ * ay_) * (bx_ * cy_ - cx_ * by_) -
        (bx_ * bx_ + by_ * by_) * (ax_ * cy_ - cx_ * ay_) +
        (cx_ * cx_ + cy_ * cy_) * (ax_ * by_ - bx_ * ay_)
    ) > 0;
}
You might also need to check if your points are in counterclockwise order:
function ccw(ax, ay, bx, by, cx, cy) {
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay) > 0;
}
I didn't put the ccw check inside the inCircle function because you shouldn't need to check it every time.
This process doesn't require any division or square-root operations.
You can see the code in action there: https://titouant.github.io/testTriangle/
And the source there: https://github.com/TitouanT/testTriangle
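As a quick sanity check, the same determinant test can be sketched in Python (the in_circle helper below just mirrors the JavaScript expansion above), using the unit circle through (1,0), (0,1), (-1,0):

# Python sketch of the determinant test above
def in_circle(ax, ay, bx, by, cx, cy, dx, dy):
    ax_, ay_ = ax - dx, ay - dy
    bx_, by_ = bx - dx, by - dy
    cx_, cy_ = cx - dx, cy - dy
    return ((ax_**2 + ay_**2) * (bx_ * cy_ - cx_ * by_)
          - (bx_**2 + by_**2) * (ax_ * cy_ - cx_ * ay_)
          + (cx_**2 + cy_**2) * (ax_ * by_ - bx_ * ay_)) > 0

# a, b, c are in counterclockwise order on the unit circle
a, b, c = (1, 0), (0, 1), (-1, 0)
print(in_circle(*a, *b, *c, 0.0, 0.0))  # True: the origin is inside
print(in_circle(*a, *b, *c, 2.0, 0.0))  # False: (2,0) is outside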

(In case you are interested in a non-obvious/"crazy" kind of solution.)
One equivalent property of Delaunay triangulation is as follows: if you build a circumcircle of any triangle in the triangulation, it is guaranteed not to contain any other vertices of the triangulation.
Another equivalent property of Delaunay triangulation is: it maximizes the minimal triangle angle (i.e. maximizes it among all triangulations on the same set of points).
This suggests an algorithm for your test:
Consider triangle ABC built on the original 3 points.
If the test point P lies inside the triangle, it is definitely inside the circle.
If the test point P belongs to one of the "corner" regions (see the shaded regions in the picture below), it is definitely outside the circle.
Otherwise (let's say P lies in region 1), consider the two triangulations of quadrilateral ABCP: the original one, which contains the original triangle (red diagonal), and the alternate one with the "flipped" diagonal (blue diagonal).
Determine which of these triangulations is a Delaunay triangulation by testing the "flip" condition, e.g. by comparing α = min(∠1,∠4,∠5,∠8) vs. β = min(∠2,∠3,∠6,∠7).
If the original triangulation is a Delaunay triangulation (α > β), P lies outside the circle. If the alternate triangulation is a Delaunay triangulation (α < β), P lies inside the circle.
Done.
(Revisiting this answer after a while.)
This solution might not be as "crazy" as it appears at first sight. Note that in order to compare angles at steps 5 and 6 there's no need to calculate the actual angle values: it is sufficient to know their cosines (i.e. there's no need to involve trigonometric functions).
A C++ version of the code
#include <cmath>
#include <array>
#include <algorithm>

struct pnt_t
{
    int x, y;

    pnt_t ccw90() const
    { return { -y, x }; }

    double length() const
    { return std::hypot(x, y); }

    pnt_t &operator -=(const pnt_t &rhs)
    {
        x -= rhs.x;
        y -= rhs.y;
        return *this;
    }

    friend pnt_t operator -(const pnt_t &lhs, const pnt_t &rhs)
    { return pnt_t(lhs) -= rhs; }

    friend int operator *(const pnt_t &lhs, const pnt_t &rhs)
    { return lhs.x * rhs.x + lhs.y * rhs.y; }
};

// Sign of the cross product (b - a) x (p - a): which side of line ab is p on?
int side(const pnt_t &a, const pnt_t &b, const pnt_t &p)
{
    int cp = (b - a).ccw90() * (p - a);
    return (cp > 0) - (cp < 0);
}

void make_ccw(std::array<pnt_t, 3> &t)
{
    if (side(t[0], t[1], t[2]) < 0)
        std::swap(t[0], t[1]);
}

// Negated cosine of the angle at vertex `o` in the path a -> o -> b
double ncos(pnt_t a, const pnt_t &o, pnt_t b)
{
    a -= o;
    b -= o;
    return -(a * b) / (a.length() * b.length());
}

bool inside_circle(std::array<pnt_t, 3> t, const pnt_t &p)
{
    make_ccw(t);

    std::array<int, 3> s =
        { side(t[0], t[1], p), side(t[1], t[2], p), side(t[2], t[0], p) };

    unsigned outside = std::count(std::begin(s), std::end(s), -1);
    if (outside != 1)
        return outside == 0;

    // Rotate the triangle so that p lies beyond edge (t[0], t[1])
    while (s[0] >= 0)
    {
        std::rotate(std::begin(t), std::begin(t) + 1, std::end(t));
        std::rotate(std::begin(s), std::begin(s) + 1, std::end(s));
    }

    double
        min_org = std::min({
            ncos(t[0], t[1], t[2]), ncos(t[2], t[0], t[1]),
            ncos(t[1], t[0], p), ncos(p, t[1], t[0]) }),
        min_alt = std::min({
            ncos(t[1], t[2], p), ncos(p, t[2], t[0]),
            ncos(t[0], p, t[2]), ncos(t[2], p, t[1]) });

    return min_org <= min_alt;
}
and a couple of tests with arbitrarily chosen triangles and a large number of random points
Of course, the whole thing can be easily reformulated without even mentioning Delaunay triangulations. Starting from step 4 this solution is based on the property of the opposite angles of a cyclic quadrilateral, which must sum to 180°.
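That property is easy to check numerically. In the Python sketch below (an illustration with an assumed right triangle on the unit circle), the angle at B and the angle at P sum to exactly 180° when P lies on the circumcircle, to more than 180° when P is inside, and to less than 180° when P is outside:

import math

def angle(prev, vertex, nxt):
    # interior angle at `vertex` of the path prev -> vertex -> nxt, in degrees
    ax, ay = prev[0] - vertex[0], prev[1] - vertex[1]
    bx, by = nxt[0] - vertex[0], nxt[1] - vertex[1]
    cosv = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cosv))

A, B, C = (1, 0), (0, 1), (-1, 0)        # on the unit circle
for P in [(0, -1), (0, -0.5), (0, -2)]:  # on / inside / outside that circle
    print(P, round(angle(A, B, C) + angle(C, P, A), 2))  # 180.0, 216.87, 143.13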

In this Math SE post of mine I included an equation which checks if four points are cocircular by computing a 4×4 determinant. By turning that equation into an inequality you can check for insideness.
If you want to know which direction the inequality has to go, consider the case of a point very far away. In this case, the x²+y² term will dominate all other terms. So you can simply assume that for the point in question this term is one while the three others are zero. Then pick the sign of your inequality so this value does not satisfy it, since such a point is definitely outside but you want to characterize inside.
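A minimal sketch of that determinant check (my transcription into Python; the rows are (x, y, x² + y², 1), and with a, b, c in counterclockwise order a positive determinant means d is inside, consistent with the far-away-point argument above):

import numpy as np

def in_circumcircle(a, b, c, d):
    # four points are cocircular iff det == 0;
    # for counterclockwise (a, b, c), det > 0 iff d lies inside
    m = np.array([[p[0], p[1], p[0]**2 + p[1]**2, 1.0] for p in (a, b, c, d)])
    return np.linalg.det(m) > 0

print(in_circumcircle((1, 0), (0, 1), (-1, 0), (0, 0)))  # True
print(in_circumcircle((1, 0), (0, 1), (-1, 0), (2, 0)))  # False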
If numeric precision is an issue, this page by Prof. Shewchuk describes how to obtain consistent predicates for points expressed using regular double precision floating point numbers.

Given 3 points (x1,y1), (x2,y2), (x3,y3), and a point (x,y) that you want to check as inside the circle defined by those 3 points, you can do something like this:
/**
 * @param x  x coordinate of the point to check
 * @param y  y coordinate of the point to check
 * @param cx center x
 * @param cy center y
 * @param r  radius of circle
 * @return   whether (x,y) is inside the circle
 */
static boolean g(double x, double y, double cx, double cy, double r) {
    return Math.sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy)) < r;
}
// check if (x,y) is inside the circle defined by (x1,y1), (x2,y2), (x3,y3)
static boolean isInside(double x, double y, double x1, double y1,
                        double x2, double y2, double x3, double y3) {
    double m1 = (x1 - x2) / (y2 - y1);
    double m2 = (x1 - x3) / (y3 - y1);
    double b1 = ((y1 + y2) / 2) - m1 * (x1 + x2) / 2;
    double b2 = ((y1 + y3) / 2) - m2 * (x1 + x3) / 2;
    double xx = (b2 - b1) / (m1 - m2);
    double yy = m1 * xx + b1;
    return g(x, y, xx, yy, Math.sqrt((xx - x1) * (xx - x1) + (yy - y1) * (yy - y1)));
}

public static void main(String[] args) {
    // is (0,1) inside the circle defined by (0,0), (0,2), (1,1)?
    System.out.println(isInside(0, 1, 0, 0, 0, 2, 1, 1));
}
The expression for the center of the circle comes from intersecting the perpendicular bisectors of 2 line segments; above I chose (x1,y1)-(x2,y2) and (x1,y1)-(x3,y3). Since you know a point on each perpendicular bisector, namely the midpoints ((x1+x2)/2, (y1+y2)/2) and ((x1+x3)/2, (y1+y3)/2), and since you also know the slope of each perpendicular bisector, namely (x1-x2)/(y2-y1) and (x1-x3)/(y3-y1) for the two segments respectively, you can solve for the (x,y) where they intersect.
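The same construction can be sketched in Python for verification (my addition; it inherits the same degenerate cases as the Java code, e.g. y2 == y1 makes m1 undefined and collinear input points make m1 == m2):

def circumcenter(x1, y1, x2, y2, x3, y3):
    # intersect the perpendicular bisectors of (x1,y1)-(x2,y2) and (x1,y1)-(x3,y3)
    m1 = (x1 - x2) / (y2 - y1)
    m2 = (x1 - x3) / (y3 - y1)
    b1 = (y1 + y2) / 2 - m1 * (x1 + x2) / 2
    b2 = (y1 + y3) / 2 - m2 * (x1 + x3) / 2
    xx = (b2 - b1) / (m1 - m2)
    return xx, m1 * xx + b1

print(circumcenter(0, 0, 0, 2, 1, 1))  # (0.0, 1.0), at distance 1 from all three points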


map range of IEEE 32bit float [1:2) to some arbitrary [a:b)

Back story: uniform PRNG with arbitrary endpoints
I've got a fast uniform pseudo-random number generator that creates uniform float32 numbers in the range [1:2), i.e. u : 1 <= u <= 2-eps. Unfortunately, mapping the endpoints of [1:2) to those of an arbitrary range [a:b) is non-trivial in floating point math. I'd like to exactly match the endpoints with a simple affine calculation.
Formally stated
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps
1 -> a and nextlower(2) -> nextlower(b)
where nextlower(q) is the next lower FP representable number (e.g. in C++ std::nextafter(float(q),float(q-1)))
What I've tried
The simple mapping f(x,a,b) = (x-1)*(b-a) + a always achieves the f(1) condition but sometimes fails the f(2) condition due to floating point rounding.
I've tried replacing the 1 with a free design parameter to cancel FP errors in the spirit of Kahan summation.
i.e. with
f(x,c0,c1,c2) = (x-c0)*c1 + c2
one mathematical solution is c0=1,c1=(b-a),c2=a (the simple mapping above),
but the extra parameter lets me play around with constants c0,c1,c2 to match the endpoints. I'm not sure I understand the principles behind Kahan summation well enough to apply them to determine the parameters or even be confident a solution exists. It feels like I'm bumping around in the dark where others might've found the light already.
Aside: I'm fine assuming the following
a < b
both a and b are far from zero, i.e. OK to ignore subnormals
a and b are far enough apart (measured in representable FP values) to mitigate non-uniform quantization and avoid degenerate cases
Update
I'm using a modified form of Chux's answer to avoid the division.
While I'm not 100% certain my refactoring kept all the magic, it does still work in all my test cases.
float lerp12(float x, float a, float b)
{
    const float scale = 1.0000001f;
    // scale = 1/(nextlower(2) - 1);
    const float ascale = a * scale;
    const float bscale = nextlower(b) * scale;
    return (nextlower(2) - x) * ascale + (x - 1.0f) * bscale;
}
Note that only the last line (5 FLOPS) depends on x, so the others can be reused if (a,b) remain the same.
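The endpoint behavior is easy to cross-check. Below is a NumPy float32 transcription of lerp12 (a sketch, not the original code: nextlower is emulated with np.nextafter, and a = 3.5, b = 7.25 is just an arbitrary example pair):

import numpy as np

def nextlower(q):
    # next representable float32 below q
    return np.nextafter(np.float32(q), np.float32('-inf'))

def lerp12(x, a, b):
    x = np.float32(x)
    scale = np.float32(1.0000001)  # ~ 1/(nextlower(2) - 1)
    ascale = np.float32(a) * scale
    bscale = nextlower(b) * scale
    return (nextlower(2.0) - x) * ascale + (x - np.float32(1.0)) * bscale

a, b = np.float32(3.5), np.float32(7.25)
print(lerp12(1.0, a, b) == a)                        # expect True
print(lerp12(nextlower(2.0), a, b) == nextlower(b))  # expect True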
OP's goal
I want to make an IEEE-754 32 bit floating point affine function f(x,a,b) for 1<=x<2 and arbitrary a,b that exactly maps 1 -> a and nextlower(2) -> nextlower(b)
This differs slightly from "map range of IEEE 32bit float [1:2) to some arbitrary [a:b)".
General case
Map x0 to y0, x1 to y1 and various x in-between to y :
m = (y1 - y0)/(x1 - x0);
y = m*(x - x0) + y0;
OP's case
// x0 = 1.0f;
// x1 = nextafterf(2.0f, 1.0f);
// y0 = a;
// y1 = nextafterf(b, a);
#include <math.h> // for nextafterf()
float x = random_number_1_to_almost_2();
float m = (nextafterf(b, a) - a)/(nextafterf(2.0f, 1.0f) - 1.0f);
float y = m*(x - 1.0f) + a;
nextafterf(2.0f, 1.0f) - 1.0f, x - 1.0f and nextafterf(b, a) are exact, incurring no calculation error.
nextafterf(2.0f, 1.0f) - 1.0f is a value a little less than 1.0f.
Recommendation
Other reformulations are possible, with better symmetry and numerical stability at the end-points.
float x = random_number_1_to_almost_2();
float afactor = nextafterf(2.0f, 1.0f) - x; // exact
float bfactor = x - 1.0f; // exact
float xwidth = nextafterf(2.0f, 1.0f) - 1.0f; // exact
// Do not re-order next line of code, perform 2 divisions
float y = (afactor/xwidth)*a + (bfactor/xwidth)*nextafterf(b, a);
Notice afactor/xwidth and bfactor/xwidth are both exactly 0.0 or 1.0 at the end-points, thus meeting "maps 1 -> a and nextlower(2) -> nextlower(b)". Extended precision not needed.
OP's (x-c0)*c1 + c2 has trouble as it divides (x-c0)*c1 by (2.0 - 1.0) or 1.0 (implied), when it should divide by nextafterf(2.0f, 1.0f) - 1.0f.
Simple lerping based on fused multiply-add can reliably hit the endpoints for interpolation factors 0 and 1. For x in [1, 2) the interpolation factor x - 1 does not reach unity, which can be fixed by slightly stretching it: multiply x - 1 by (2.0f / nextlower(2.0f)). Obviously the endpoint also needs to be adjusted to nextlower(b). For the C code below I have used the definition of nextlower() provided in the question, which may not be what the asker desires, since for floating-point q sufficiently large in magnitude, q == (q - 1).
Asker stated in comments that it is understood that this kind of mapping is not going to result in an exactly uniform distribution of the pseudo-random numbers in the interval [a, b), only approximately so, and that pathological mappings may occur when a and b are extremely close together. I have not mathematically proved that the implementation of map() below guarantees the desired behavior, but it seems to do so for a large number of random test cases.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

float nextlowerf (float q)
{
    return nextafterf (q, q - 1);
}

float map (float a, float b, float x)
{
    float t = (x - 1.0f) * (2.0f / nextlowerf (2.0f));
    return fmaf (t, nextlowerf (b), fmaf (-t, a, a));
}

float uint32_as_float (uint32_t a)
{
    float r;
    memcpy (&r, &a, sizeof(r));
    return r;
}

// George Marsaglia's KISS PRNG, period 2**123. Newsgroup sci.math, 21 Jan 1999
// Bug fix: Greg Rose, "KISS: A Bit Too Simple" http://eprint.iacr.org/2011/007
static uint32_t kiss_z=362436069, kiss_w=521288629;
static uint32_t kiss_jsr=123456789, kiss_jcong=380116160;
#define znew (kiss_z=36969*(kiss_z&65535)+(kiss_z>>16))
#define wnew (kiss_w=18000*(kiss_w&65535)+(kiss_w>>16))
#define MWC  ((znew<<16)+wnew)
#define SHR3 (kiss_jsr^=(kiss_jsr<<13),kiss_jsr^=(kiss_jsr>>17), \
              kiss_jsr^=(kiss_jsr<<5))
#define CONG (kiss_jcong=69069*kiss_jcong+1234567)
#define KISS ((MWC^CONG)+SHR3)

int main (void)
{
    float a, b, x, r;
    float FP32_MIN_NORM = 0x1.000000p-126f;
    float FP32_MAX_NORM = 0x1.fffffep+127f;

    do {
        do {
            a = uint32_as_float (KISS);
        } while ((fabsf (a) < FP32_MIN_NORM) || (fabsf (a) > FP32_MAX_NORM) || isnan (a));
        do {
            b = uint32_as_float (KISS);
        } while ((fabsf (b) < FP32_MIN_NORM) || (fabsf (b) > FP32_MAX_NORM) || isnan (b) || (b < a));
        x = 1.0f;
        r = map (a, b, x);
        if (r != a) {
            printf ("lower bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
        x = nextlowerf (2.0f);
        r = map (a, b, x);
        if (r != nextlowerf (b)) {
            printf ("upper bound failed: a=%12.6a b=%12.6a map=%12.6a\n", a, b, r);
            return EXIT_FAILURE;
        }
    } while (1);
    return EXIT_SUCCESS;
}

What is the 'roughness constant' of this midpoint displacement algorithm, and how can I modify it?

I've taken code from "Midpoint displacement algorithm example", cleaned it up a bit, and adapted it to work as a 1D linear terrain generator. Below is my new version of the doMidpoint() method:
public boolean setMidpointDisplacement(int x1, int x2) {
    // Exit recursion if points are next to each other
    if (x2 - x1 < 2) {
        return false;
    }

    final int midX = (x1 + x2) / 2;
    final int dist = x2 - x1;
    final int distHalf = dist / 2;

    final int y1 = map[x1];
    final int y2 = map[x2];
    final int delta = random.nextInt(dist) - distHalf; // +/- half the distance
    final int sum = y1 + y2;

    map[midX] = (sum + delta) / 2; // Sets the midpoint

    // Divide and repeat
    setMidpointDisplacement(x1, midX);
    setMidpointDisplacement(midX, x2);
    return true;
}
The code seems to work well and produces workable terrain (you can see how I've tested it, with a rudimentary GUI)
After reading "Generating Random Fractal Terrain" and "Mid Point Displacement Algorithm", my question is:
How can I identify the 'roughness constant' implicitly utilized by this code? And then, how can I change it?
Additionally (and this may or may not be directly related to my major question), I've noticed that the code adds the sum of the y-values to the "delta" (change amount) and divides this by 2 -- which is the same as averaging the sum and then adding delta/2. Does this have any bearing on the 'roughness constant'? I'm thinking that I could do
map[midX] = sum/2 + delta/K;
and K would now be representative of the 'roughness constant', but I'm not sure if this is accurate or not, since it seems to allow me to control smoothing but doesn't directly control "how much the random number range is reduced each time through the loop" as defined by "Generating Random Fractal Terrain".
Like I've said before, I ported the 2D MDP noise generator I found into a 1D version -- but I'm fairly certain I did it accurately, so that is not the source of any problems.
How can I identify the 'roughness constant' implicitly utilized by this code?
In the cited article, roughness is the amount by which you diminish the maximum random displacement. As your displacement is random.nextInt(dist), i.e. roughly dist*random.nextDouble(), your dist = x2-x1, and you go from one recursion step to the next with half of this dist, it follows that the roughness == 1 (in the cited terminology).
And then, how can I change it?
public boolean setMidpointDisplacement(int x1, int x2, int roughness) {
    // Exit recursion if points are next to each other
    if (x2 - x1 < 2) {
        return false;
    }

    // this is 2^-roughness as per the cited article;
    // you could pass it precalculated as a param, computing it here
    // only serves to relate it to the cited article
    double factor = 1.0 / (1 << roughness);

    final int midX = (x1 + x2) / 2;
    final int dist = x2 - x1;
    final int distHalf = dist / 2;

    final int y1 = map[x1];
    final int y2 = map[x2];
    // the factor is applied here; the cast back to int is necessary
    final int delta = (int) (factor * (random.nextInt(dist) - distHalf)); // +/- half the distance
    final int sum = y1 + y2;

    map[midX] = (sum + delta) / 2; // Sets the midpoint

    // Divide and repeat
    setMidpointDisplacement(x1, midX, roughness);
    setMidpointDisplacement(midX, x2, roughness);
    return true;
}
Additionally, and this may or may not be directly related to my major question, but I've noticed that the code adds the sum of the y-values to the "delta" (change amount) and divides this by 2
Their way has the advantage of doing it with a single division. As you work with ints, the accumulated truncation errors will be smaller with a single division (not to mention it's slightly faster).
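For comparison with the article's formulation, here is a Python sketch (my addition, not the poster's code) in which the maximum displacement is carried explicitly and multiplied by 2^-roughness on every recursion level; roughness = 1 reproduces the halving behavior identified above:

import random

def midpoint_displace(heights, x1, x2, disp, roughness):
    # heights: list with heights[x1] and heights[x2] already set
    # disp: current maximum random offset
    if x2 - x1 < 2:
        return
    mid = (x1 + x2) // 2
    heights[mid] = (heights[x1] + heights[x2]) / 2 + random.uniform(-disp, disp)
    disp *= 2 ** -roughness  # reduce the random range on each level
    midpoint_displace(heights, x1, mid, disp, roughness)
    midpoint_displace(heights, mid, x2, disp, roughness)

n = 257  # 2**8 + 1 points
heights = [0.0] * n
midpoint_displace(heights, 0, n - 1, 64.0, 1.0)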

Are there any numerically stable versions of the centroid finding algorithm for polygons?

Say I have an almost degenerate 2-D polygon such as:
[[40.802,9.289],[40.875,9.394],[40.910000000000004,9.445],[40.911,9.446],[40.802,9.289]]
For reference, the polygon is a long, thin, almost collinear sliver.
If I use the standard centroid algorithm as shown on Wikipedia, for example with this Python code:
pts = [[40.802, 9.289], [40.875, 9.394], [40.910000000000004, 9.445],
       [40.911, 9.446], [40.802, 9.289]]
a = 0.0
c = [0.0, 0.0]
for i in range(0, 4):
    k = pts[i][0] * pts[i + 1][1] - pts[i + 1][0] * pts[i][1]
    a += k
    c = [c[0] + k * (pts[i][0] + pts[i + 1][0]),
         c[1] + k * (pts[i][1] + pts[i + 1][1])]
c = [c[0] / (3 * a), c[1] / (3 * a)]
I get c = [-10133071.666666666, -14636692.583333334]. In other cases where a == 0.0 I might also get a divide by zero.
What I would ideally like is that in the worst case, the centroid is equal to one of the vertices or somewhere within the polygon, and that no arbitrary tolerances should be used for avoiding this situation. Is there some clever way to rewrite the equation to make it more numerically stable?
When the area is zero (or very close to zero, if you cannot afford to do exact arithmetic), probably the best option is to take perimeter centroid of the set of points.
Perimeter centroid is given by the ratio of the weighted sum of midpoints of each side of the polygon (the weight is the length of the corresponding side), to the perimeter of the polygon.
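A direct Python sketch of that definition (my addition), fed with the question's sliver polygon; since the input repeats the first vertex at the end, consecutive pairs already enumerate the sides:

import math

def perimeter_centroid(pts):
    # pts is a closed ring: pts[0] == pts[-1]
    cx = cy = perimeter = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        length = math.hypot(x1 - x0, y1 - y0)  # the weight of this side
        cx += length * (x0 + x1) / 2           # weighted midpoint
        cy += length * (y0 + y1) / 2
        perimeter += length
    return cx / perimeter, cy / perimeter

pts = [(40.802, 9.289), (40.875, 9.394), (40.910000000000004, 9.445),
       (40.911, 9.446), (40.802, 9.289)]
print(perimeter_centroid(pts))  # lands on the sliver instead of millions away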
Using exact arithmetic, it is possible to calculate the centroid in this case.
(Figure: the red point is the perimeter centroid and the green one is the true centroid.)
I used sage to calculate the centroid exactly https://cloud.sagemath.com/projects/f3149cab-2b4b-494a-b795-06d62ae133dd/files/2016-08-17-102024.sagews.
People have been looking for a way to relate these points with respect to each other -- https://math.stackexchange.com/questions/1173903/centroids-of-a-polygon.
I don't think this formula can easily be made more stable for nearly degenerate 2D polygons. The problem is that the calculation of the area (A) relies on subtracting trapezoidal shapes (see Paul Bourke). For very small areas you inevitably run into the limits of numerical precision.
I see two possible solutions:
1. You could check the area, and if it gets below a threshold, assume the polygon is degenerate and just take the mean of the minimal and maximal x and y values (the middle of the line).
2. Use floating point arithmetic with higher precision, maybe something like mpmath (see the sketch below).
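For option 2, a minimal sketch with mpmath (my addition; mp.dps sets the working precision in decimal digits, and the loop is the same as in the question):

from mpmath import mp, mpf

mp.dps = 50  # 50 significant digits
pts = [[mpf('40.802'), mpf('9.289')], [mpf('40.875'), mpf('9.394')],
       [mpf('40.910000000000004'), mpf('9.445')],
       [mpf('40.911'), mpf('9.446')], [mpf('40.802'), mpf('9.289')]]
a = mpf(0)
c = [mpf(0), mpf(0)]
for i in range(4):
    k = pts[i][0] * pts[i + 1][1] - pts[i + 1][0] * pts[i][1]
    a += k
    c = [c[0] + k * (pts[i][0] + pts[i + 1][0]),
         c[1] + k * (pts[i][1] + pts[i + 1][1])]
if a != 0:
    print([c[0] / (3 * a), c[1] / (3 * a)])  # the tiny area now survives the cancellation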
By the way, you have a mistake in your code. It should be:
c = [c[0] + k * (pts[i][0] + pts[i + 1][0]), c[1] + k * (pts[i][1] + pts[i + 1][1])]
However, that doesn't make a difference here.
I would say that the following is an authoritative C implementation for computing the centroid of a simple polygon; it was written by Joseph O'Rourke, author of the book Computational Geometry in C.
/*
  Written by Joseph O'Rourke
  orourke@cs.smith.edu
  October 27, 1995

  Computes the centroid (center of gravity) of an arbitrary
  simple polygon via a weighted sum of signed triangle areas,
  weighted by the centroid of each triangle.
  Reads x,y coordinates from stdin.
  NB: Assumes points are entered in ccw order!
  E.g., input for square:
    0  0
    10 0
    10 10
    0  10
  This solves Exercise 12, p.47, of my text,
  Computational Geometry in C. See the book for an explanation
  of why this works. Follow links from
  http://cs.smith.edu/~orourke/
*/
#include <stdio.h>

#define DIM 2                    /* Dimension of points */
typedef int tPointi[DIM];        /* type integer point */
typedef double tPointd[DIM];     /* type double point */

#define PMAX 1000                /* Max # of pts in polygon */
typedef tPointi tPolygoni[PMAX]; /* type integer polygon */

int  Area2( tPointi a, tPointi b, tPointi c );
void FindCG( int n, tPolygoni P, tPointd CG );
int  ReadPoints( tPolygoni P );
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c );
void PrintPoint( tPointd p );

int main()
{
    int n;
    tPolygoni P;
    tPointd CG;

    n = ReadPoints( P );
    FindCG( n, P, CG );
    printf("The cg is ");
    PrintPoint( CG );
}

/*
  Returns twice the signed area of the triangle determined by a,b,c,
  positive if a,b,c are oriented ccw, and negative if cw.
*/
int Area2( tPointi a, tPointi b, tPointi c )
{
    return
        (b[0] - a[0]) * (c[1] - a[1]) -
        (c[0] - a[0]) * (b[1] - a[1]);
}

/*
  Returns the cg in CG. Computes the weighted sum of
  each triangle's area times its centroid. Twice area
  and three times centroid is used to avoid division
  until the last moment.
*/
void FindCG( int n, tPolygoni P, tPointd CG )
{
    int i;
    double A2, Areasum2 = 0; /* Partial area sum */
    tPointi Cent3;

    CG[0] = 0;
    CG[1] = 0;
    for (i = 1; i < n-1; i++) {
        Centroid3( P[0], P[i], P[i+1], Cent3 );
        A2 = Area2( P[0], P[i], P[i+1] );
        CG[0] += A2 * Cent3[0];
        CG[1] += A2 * Cent3[1];
        Areasum2 += A2;
    }
    CG[0] /= 3 * Areasum2;
    CG[1] /= 3 * Areasum2;
    return;
}

/*
  Returns three times the centroid. The factor of 3 is
  left in to permit division to be avoided until later.
*/
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c )
{
    c[0] = p1[0] + p2[0] + p3[0];
    c[1] = p1[1] + p2[1] + p3[1];
    return;
}

void PrintPoint( tPointd p )
{
    int i;
    putchar('(');
    for ( i = 0; i < DIM; i++ ) {
        printf("%f", p[i]);
        if (i != DIM - 1) putchar(',');
    }
    putchar(')');
    putchar('\n');
}

/*
  Reads in the coordinates of the vertices of a polygon from stdin,
  puts them into P, and returns n, the number of vertices.
  The input is assumed to be pairs of whitespace-separated coordinates,
  one pair per line. The number of points is not part of the input.
*/
int ReadPoints( tPolygoni P )
{
    int n = 0;

    printf("Polygon:\n");
    printf("  i   x   y\n");
    while ( (n < PMAX) &&
            (scanf("%d %d", &P[n][0], &P[n][1]) != EOF) ) {
        printf("%3d%4d%4d\n", n, P[n][0], P[n][1]);
        ++n;
    }
    if (n < PMAX)
        printf("n = %3d vertices read\n", n);
    else
        printf("Error in ReadPoints:\ttoo many points; max is %d\n", PMAX);
    putchar('\n');
    return n;
}
The code solves Exercise 12 on page 47 of the first edition of the book; a brief explanation is here:
Subject 2.02: How can the centroid of a polygon be computed?
The centroid (a.k.a. the center of mass, or center of gravity)
of a polygon can be computed as the weighted sum of the centroids
of a partition of the polygon into triangles. The centroid of a
triangle is simply the average of its three vertices, i.e., it
has coordinates (x1 + x2 + x3)/3 and (y1 + y2 + y3)/3. This
suggests first triangulating the polygon, then forming a sum
of the centroids of each triangle, weighted by the area of
each triangle, the whole sum normalized by the total polygon area.
This indeed works, but there is a simpler method: the triangulation
need not be a partition, but rather can use positively and
negatively oriented triangles (with positive and negative areas),
as is used when computing the area of a polygon. This leads to
a very simple algorithm for computing the centroid, based on a
sum of triangle centroids weighted with their signed area.
The triangles can be taken to be those formed by any fixed point,
e.g., the vertex v0 of the polygon, and the two endpoints of
consecutive edges of the polygon: (v1,v2), (v2,v3), etc. The area
of a triangle with vertices a, b, c is half of this expression:
(b[X] - a[X]) * (c[Y] - a[Y]) -
(c[X] - a[X]) * (b[Y] - a[Y]);
Code available at ftp://cs.smith.edu/pub/code/centroid.c (3K).
Reference: [Gems IV] pp.3-6; also includes code.
I did not study this algorithm, nor have I tested it, but at first glance it seems to me that it is slightly different from the Wikipedia one.
The code from the book Graphics Gems IV is here:
/*
 * ANSI C code from the article
 * "Centroid of a Polygon"
 * by Gerard Bashein and Paul R. Detmer,
 * (gb@locke.hs.washington.edu, pdetmer@u.washington.edu)
 * in "Graphics Gems IV", Academic Press, 1994
 */
/*********************************************************************
polyCentroid: Calculates the centroid (xCentroid, yCentroid) and area
of a polygon, given its vertices (x[0], y[0]) ... (x[n-1], y[n-1]). It
is assumed that the contour is closed, i.e., that the vertex following
(x[n-1], y[n-1]) is (x[0], y[0]). The algebraic sign of the area is
positive for counterclockwise ordering of vertices in x-y plane;
otherwise negative.
Returned values: 0 for normal execution; 1 if the polygon is
degenerate (number of vertices < 3); and 2 if area = 0 (and the
centroid is undefined).
**********************************************************************/
int polyCentroid(double x[], double y[], int n,
                 double *xCentroid, double *yCentroid, double *area)
{
    register int i, j;
    double ai, atmp = 0, xtmp = 0, ytmp = 0;
    if (n < 3) return 1;
    for (i = n-1, j = 0; j < n; i = j, j++)
    {
        ai = x[i] * y[j] - x[j] * y[i];
        atmp += ai;
        xtmp += (x[j] + x[i]) * ai;
        ytmp += (y[j] + y[i]) * ai;
    }
    *area = atmp / 2;
    if (atmp != 0)
    {
        *xCentroid = xtmp / (3 * atmp);
        *yCentroid = ytmp / (3 * atmp);
        return 0;
    }
    return 2;
}
CGAL allows you to use an exact multi-precision number type instead of double or float in order to get exact computations; it costs an execution-time overhead. The idea is described in The Exact Computation Paradigm.
A commercial implementation claims to use Green's theorem; I do not know whether it uses a multi-precision number type:
The area and centroid are computed by applying Green's theorem using
only the points on the contour or polygon
I think it refers to the Wikipedia algorithm, since the formulae in Wikipedia are an application of Green's theorem, as explained here.

How to determine whether a point is inside a 2D convex polygon in faster than N time

I know the standard ray casting algorithm for finding whether a point is inside any polygon. However, is there a faster method if you limit yourself to just a convex polygon?
Yes, you can use binary search. You do this by recursively cutting the polygon into a fraction of its size (i.e. half) and checking on which side you are. For example, you can start by checking whether you are on the positive or negative side on the line going through vertex 0 and vertex n/2. Once you have 3 vertices, you simply test versus the remaining two sides, completing the test versus that triangle.
Here's some pseudo-code, that will hopefully make this easier to understand:
function TestConvexPolygon(point, polygon)
    if polygon.size == 3 then
        return TestTriangle(point, polygon) // constant time
    if TestLine(point, polygon[0], polygon[polygon.size/2]) > 0 then
        return TestConvexPolygon(point, new polygon from polygon.size/2 to polygon.size-1 and 0)
    else
        return TestConvexPolygon(point, new polygon from 0 to polygon.size/2)
Another way to visualize the idea is that you can view the polygon as a triangle-fan. You then start by testing your point versus the median interior edge. That will eliminate half of the possible triangles from the fan. Since half a triangle fan is still a triangle fan, you can do this recursively until you only have one triangle left in your fan, which you then test explicitly.
A real implementation needs some index juggling, but is otherwise easy and robust.
As the answer above states, the algorithm is recursive. On each step you cut off the part of the polygon in which the point cannot be. Here is C++ code:
#include "stdafx.h"
#include <vector>
#include <iostream>
struct vec2d {
double x, y;
vec2d(double _x, double _y) : x(_x), y(_y) {}
};
// Finds the cross product of the vectors: AB x BC
double crossProduct(vec2d pointA, vec2d pointB, vec2d pointC) {
vec2d vectorAB = vec2d(pointB.x - pointA.x, pointB.y - pointA.y);
vec2d vectorBC = vec2d(pointC.x - pointB.x, pointC.y - pointB.y);
return vectorAB.x * vectorBC.y - vectorBC.x * vectorAB.y;
}
// Finds area for the triangle ABC
double S(vec2d A, vec2d B, vec2d C) {
return crossProduct(A, B, C) / 2;
}
bool isPointInsideTriangle(vec2d A, vec2d B, vec2d C, vec2d point)
{
return S(A, B, point) >= 0 && S(B, C, point) >= 0 && S(C, A, point) >= 0;
}
bool isPointAboveLine(vec2d A, vec2d B, vec2d point)
{
return S(A, B, point) >= 0;
}
// O(logN), works only for convex polygons
bool isPointInsidePolygon(std::vector<vec2d> polygon, vec2d point) {
if (polygon.size() == 3) {
return isPointInsideTriangle(polygon[0], polygon[1], polygon[2], point);
}
if (isPointAboveLine(polygon[0], polygon[polygon.size() / 2], point)) {
std::vector<vec2d> polygonAbove(polygon.begin() + polygon.size() / 2, polygon.end());
polygonAbove.emplace(polygonAbove.begin(), polygon[0]);
return isPointInsidePolygon(polygonAbove, point);
}
else {
std::vector<vec2d> polygonBelow(polygon.begin(), polygon.begin() + polygon.size() / 2 + 1);
return isPointInsidePolygon(polygonBelow, point);
}
}
int main()
{
std::vector<vec2d> convexPolygon;
convexPolygon.push_back(vec2d(0, 2));
convexPolygon.push_back(vec2d(2, 0));
convexPolygon.push_back(vec2d(4, 1));
convexPolygon.push_back(vec2d(6, 3));
convexPolygon.push_back(vec2d(6, 4));
convexPolygon.push_back(vec2d(5, 6));
convexPolygon.push_back(vec2d(2, 6));
convexPolygon.push_back(vec2d(1, 4));
std::cout << isPointInsidePolygon(convexPolygon, vec2d(2, 5));
return 0;
}
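The recursion above copies sub-polygon vectors, which costs linear time per level; the same fan-based binary search can be expressed with index arithmetic only. Here is a Python sketch of that variant (my addition; it assumes a convex polygon given in counterclockwise order):

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (b[0] - o[0]) * (a[1] - o[1])

def point_in_convex_polygon(poly, p):
    lo, hi = 1, len(poly) - 1
    # binary search for the fan wedge (poly[0], poly[lo], poly[lo+1]) containing p
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    # explicit test against the one remaining triangle
    return (cross(poly[0], poly[lo], p) >= 0 and
            cross(poly[lo], poly[lo + 1], p) >= 0 and
            cross(poly[lo + 1], poly[0], p) >= 0)

convex = [(0, 2), (2, 0), (4, 1), (6, 3), (6, 4), (5, 6), (2, 6), (1, 4)]
print(point_in_convex_polygon(convex, (2, 5)))  # True, matching the C++ example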

Bresenham line algorithm - where does the decision parameter come from?

void line()
{
    int x1 = 10, y1 = 10, x2 = 300, y2 = 500, x, y;
    int dx, dy, // deltas
        e;      // decision parameter

    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1, 0, 0);
    setPixel(x1, y1); // plot first point

    // difference between starting and ending points
    dx = x2 - x1;
    dy = y2 - y1;

    e = 2 * dy - dx;
    x = x1; y = y1;

    for (int k = 0; k < dx - 1; ++k)
    {
        if (e < 0)
        {
            // next pixel: (x+1, y)
            e = e + 2 * dy;
        }
        else
        {
            // next pixel: (x+1, y+1)
            e = e + 2 * dy - 2 * dx;
            ++y;
        }
        ++x;
        setPixel(x, y);
    }
    glFlush();
}
Where does the e = 2*dy - dx come from? Why do we increase it by 2*dy or 2*dy - 2*dx?
Bresenham's algorithm uses only integer arithmetic. The key idea is to minimize the calculations for incremental evaluation of the line equation.
The algorithm is really simple. Let's start with the line equation
f(x) = y = a*x +b
(and assume 0 <= a < 1 for now).
When we go one pixel to the right, we get:
f(x+1) = a * (x+1) + b = f(x) + a
But in general, neither a nor y will be an integer for a typical line.
So let's just introduce an "error". We always go to the right neighbor. In doing so, we make an error of a by not going up. If our error is above half a pixel (0.5), we go up (and hence decrease the error value by a pixel again)
float e = a;
float y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > 0.5) {
        y++;
        e = e + a - 1;
    } else {
        e = e + a;
    }
}
(Note that we already set the error e to a initially and not to zero, because we always make the decision after the pixel is drawn, and we don't need to check the condition before drawing the very first pixel because that one is always exactly on the line.)
Now we have come close. But there are two things which prevent us from using integers: the 0.5 and a, which is dy/dx. But we can scale the error value (and the condition) by an arbitrary factor without changing anything. Think about it: we've measured the error in pixels so far (because that seems intuitive at first), but this algorithm could use any arbitrary unit for the error value: half pixels, double pixels, pi pixels.
So let's just scale it by 2*dx to get rid of both fractions in the formula above! (In a way, the key trick here is that the "unit" in which we measure the error value is not constant in the algorithm, but a function of the line.)
int e = 2 * dy;
int y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > dx) {
        y++;
        e = e + 2 * dy - 2 * dx;
    } else {
        e = e + 2 * dy;
    }
}
Now, we have what we want: only integers.
(One thing to note here, though: by going from float to int, we automatically "snap-in" the line's endpoints to integer coordinates - having integer endpoints is some precondition for (and limitation of) the Bresenham algorithm).
There is one additional trick: the condition contains a variable. It would be even more efficient if we tested against a constant, and ideally against zero (since branching depending just on the sign or zero flags saves us a compare operation). And we can achieve this by just shifting our error values. In the same way as before, not only the scale of the error value can be chosen arbitrarily, but also the origin.
Since we currently test for e > dx, shifting the error by -dx allows us to test against 0 (and 0 now means what dx meant before, namely 0.5 pixels). This shift only affects the initial value of e and the condition; all the increments stay the same as before:
int e = 2 * dy - dx;
int y = y1;
int x = x1;
while (x <= x2) {
    SetPixel(x, y);
    x++;
    if (e > 0) {
        y++;
        e = e + 2 * dy - 2 * dx;
    } else {
        e = e + 2 * dy;
    }
}
Voila, the 2*dy-dx term has suddenly emerged... ;)
The term 2dy - dx comes from substituting xk = yk = 0 into the decision-parameter formula pk = 2dy*xk - 2dx*yk + 2dy + dx*(2b - 1), because for the first parameter we assume the starting point of the line lies at the origin, i.e. (0,0), which makes the intercept b zero.
And since the contribution of b is the same constant in every pk, it cancels in the increments and can be ignored afterwards.
Try it yourself.
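Taking up that suggestion, here is a Python sketch (my addition) that runs the final integer loop from the previous answer and checks every pixel against direct rounding of y = y1 + a*(x - x1); the slope 200/290 is chosen so that no exact .5 ties occur:

def bresenham_first_octant(x1, y1, x2, y2):
    # integer-only loop, as derived above; requires 0 <= dy <= dx
    dx, dy = x2 - x1, y2 - y1
    e = 2 * dy - dx
    x, y = x1, y1
    pts = []
    while x <= x2:
        pts.append((x, y))
        x += 1
        if e > 0:
            y += 1
            e += 2 * dy - 2 * dx
        else:
            e += 2 * dy
    return pts

x1, y1, x2, y2 = 10, 10, 300, 210
a = (y2 - y1) / (x2 - x1)
for x, y in bresenham_first_octant(x1, y1, x2, y2):
    assert y == round(y1 + a * (x - x1))
print("all", x2 - x1 + 1, "pixels match the rounded line equation")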
