What would set S = N \ {P} mean in pseudocode? - pseudocode

Learning about DBSCAN from Wikipedia. The article has the following pseudocode:
DBSCAN(DB, distFunc, eps, minPts) {
    C = 0                                                  /* Cluster counter */
    for each point P in database DB {
        if label(P) ≠ undefined then continue              /* Previously processed in inner loop */
        Neighbors N = RangeQuery(DB, distFunc, P, eps)     /* Find neighbors */
        if |N| < minPts then {                             /* Density check */
            label(P) = Noise                               /* Label as Noise */
            continue
        }
        C = C + 1                                          /* next cluster label */
        label(P) = C                                       /* Label initial point */
        Seed set S = N \ {P}                               /* Neighbors to expand */
        for each point Q in S {                            /* Process every seed point */
            if label(Q) = Noise then label(Q) = C          /* Change Noise to border point */
            if label(Q) ≠ undefined then continue          /* Previously processed */
            label(Q) = C                                   /* Label neighbor */
            Neighbors N = RangeQuery(DB, distFunc, Q, eps) /* Find neighbors */
            if |N| ≥ minPts then {                         /* Density check */
                S = S ∪ N                                  /* Add new neighbors to seed set */
            }
        }
    }
}
I am pretty sure that |N| would mean the count of N.
What would the line:
Seed set S = N \ {P} /* Neighbors to expand */
mean? I think that S is a seed set like a list of objects. What does the N \ {P} mean?

\ is the set difference (relative complement) operation, therefore N \ {P} is the set of neighbours N without the point P itself. Meaning all points within distance eps of P, as returned by RangeQuery(DB, distFunc, P, eps) (the query result includes P).
See https://en.wikipedia.org/wiki/Complement_(set_theory)
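For example, in a language with built-in sets (a small Python sketch; the point labels are made up):

# The neighbourhood returned by RangeQuery always contains P itself.
N = {"P", "Q1", "Q2", "Q3"}   # eps-neighbourhood of P (illustrative labels)
P = "P"

S = N - {P}                   # Python's set difference is exactly N \ {P}
print(S)                      # {'Q1', 'Q2', 'Q3'}, the seeds left to expand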


How can I check whether a point is inside the circumcircle of 3 points?

Is there any easy solution? Or does anybody have an example of an implementation?
Thanks, Jonas
Let's call a, b, c our three points, C the circumcircle of (a, b, c), and d another point.
A fast way to determine if d is in C is to compute this determinant:
      | ax-dx, ay-dy, (ax-dx)² + (ay-dy)² |
det = | bx-dx, by-dy, (bx-dx)² + (by-dy)² |
      | cx-dx, cy-dy, (cx-dx)² + (cy-dy)² |
if a, b, c are in counterclockwise order then:
    if det = 0 then d is on C
    if det > 0 then d is inside C
    if det < 0 then d is outside C
Here is a JavaScript function that does just that:
function inCircle (ax, ay, bx, by, cx, cy, dx, dy) {
    let ax_ = ax-dx;
    let ay_ = ay-dy;
    let bx_ = bx-dx;
    let by_ = by-dy;
    let cx_ = cx-dx;
    let cy_ = cy-dy;
    return (
        (ax_*ax_ + ay_*ay_) * (bx_*cy_-cx_*by_) -
        (bx_*bx_ + by_*by_) * (ax_*cy_-cx_*ay_) +
        (cx_*cx_ + cy_*cy_) * (ax_*by_-bx_*ay_)
    ) > 0;
}
You might also need to check if your points are in counterclockwise order:
function ccw (ax, ay, bx, by, cx, cy) {
    return (bx - ax)*(cy - ay) - (cx - ax)*(by - ay) > 0;
}
I didn't place the ccw check inside the inCircle function because you shouldn't check it every time.
This process doesn't require any divisions or square root operations.
You can see the code in action there: https://titouant.github.io/testTriangle/
And the source there: https://github.com/TitouanT/testTriangle
(In case you are interested in a non-obvious/"crazy" kind of solution.)
One equivalent property of Delaunay triangulation is as follows: if you build a circumcircle of any triangle in the triangulation, it is guaranteed not to contain any other vertices of the triangulation.
Another equivalent property of Delaunay triangulation is: it maximizes the minimal triangle angle (i.e. maximizes it among all triangulations on the same set of points).
This suggests an algorithm for your test:
1. Consider triangle ABC built on the original 3 points.
2. If the test point P lies inside the triangle, it is definitely inside the circle.
3. If the test point P belongs to one of the "corner" regions (see the shaded regions in the picture below), it is definitely outside the circle.
4. Otherwise (let's say P lies in region 1), consider the two triangulations of quadrilateral ABCP: the original one containing the original triangle (red diagonal), and the alternate one with the "flipped" diagonal (blue diagonal).
5. Determine which of these triangulations is a Delaunay triangulation by testing the "flip" condition, e.g. by comparing α = min(∠1,∠4,∠5,∠8) vs. β = min(∠2,∠3,∠6,∠7).
6. If the original triangulation is a Delaunay triangulation (α > β), P lies outside the circle. If the alternate triangulation is a Delaunay triangulation (α < β), P lies inside the circle.
Done.
(Revisiting this answer after a while.)
This solution might not be as "crazy" as it might appear at first sight. Note that in order to compare angles at steps 5 and 6 there's no need to calculate the actual angle values. It is sufficient to know their cosines (i.e. there's no need to involve trigonometric functions).
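To spell that out: cosine is strictly decreasing on (0°, 180°), so for angles α, β in that range

    α < β  ⟺  cos α > cos β  ⟺  -cos α < -cos β

which is why the ncos() helper in the code below can compare negated cosines in place of the angles themselves.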
A C++ version of the code
#include <cmath>
#include <array>
#include <algorithm>

struct pnt_t
{
    int x, y;

    pnt_t ccw90() const
    { return { -y, x }; }

    double length() const
    { return std::hypot(x, y); }

    pnt_t &operator -=(const pnt_t &rhs)
    {
        x -= rhs.x;
        y -= rhs.y;
        return *this;
    }

    friend pnt_t operator -(const pnt_t &lhs, const pnt_t &rhs)
    { return pnt_t(lhs) -= rhs; }

    friend int operator *(const pnt_t &lhs, const pnt_t &rhs)
    { return lhs.x * rhs.x + lhs.y * rhs.y; }
};

int side(const pnt_t &a, const pnt_t &b, const pnt_t &p)
{
    int cp = (b - a).ccw90() * (p - a);
    return (cp > 0) - (cp < 0);
}

void make_ccw(std::array<pnt_t, 3> &t)
{
    if (side(t[0], t[1], t[2]) < 0)
        std::swap(t[0], t[1]);
}

double ncos(pnt_t a, const pnt_t &o, pnt_t b)
{
    a -= o;
    b -= o;
    return -(a * b) / (a.length() * b.length());
}

bool inside_circle(std::array<pnt_t, 3> t, const pnt_t &p)
{
    make_ccw(t);

    std::array<int, 3> s =
        { side(t[0], t[1], p), side(t[1], t[2], p), side(t[2], t[0], p) };

    unsigned outside = std::count(std::begin(s), std::end(s), -1);
    if (outside != 1)
        return outside == 0;

    while (s[0] >= 0)
    {
        std::rotate(std::begin(t), std::begin(t) + 1, std::end(t));
        std::rotate(std::begin(s), std::begin(s) + 1, std::end(s));
    }

    double
        min_org = std::min({
            ncos(t[0], t[1], t[2]), ncos(t[2], t[0], t[1]),
            ncos(t[1], t[0], p), ncos(p, t[1], t[0]) }),
        min_alt = std::min({
            ncos(t[1], t[2], p), ncos(p, t[2], t[0]),
            ncos(t[0], p, t[2]), ncos(t[2], p, t[1]) });

    return min_org <= min_alt;
}
and a couple of tests with arbitrarily chosen triangles and a large number of random points
Of course, the whole thing can be easily reformulated without even mentioning Delaunay triangulations. Starting from step 4, this solution is based on the property of the opposite angles of a cyclic quadrilateral, which must sum to 180°.
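Concretely (my restatement, assuming P and A lie on opposite sides of chord BC): ABPC is a cyclic quadrilateral exactly when P is on the circumcircle, so the test reduces to

    ∠BPC > 180° - ∠BAC  ⇒  P inside the circle
    ∠BPC = 180° - ∠BAC  ⇒  P on the circle
    ∠BPC < 180° - ∠BAC  ⇒  P outside the circle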
In this Math SE post of mine I included an equation which checks if four points are cocircular by computing a 4×4 determinant. By turning that equation into an inequality you can check for insideness.
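The post itself isn't reproduced here, but the standard cocircularity determinant for four points has this form (in the style of the determinant earlier on this page):

      | x1  y1  x1²+y1²  1 |
det = | x2  y2  x2²+y2²  1 |
      | x3  y3  x3²+y3²  1 |
      | x   y   x²+y²    1 |

which equals zero exactly when the four points are cocircular.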
If you want to know which direction the inequality has to go, consider the case of a point very far away. In this case, the x²+y² term will dominate all other terms. So you can simply assume that for the point in question, this term is one while the three others are zero. Then pick the sign of your inequality so this value does not satisfy it, since this point is definitely outside but you want to characterize inside.
If numeric precision is an issue, this page by Prof. Shewchuk describes how to obtain consistent predicates for points expressed using regular double precision floating point numbers.
Given 3 points (x1,y1),(x2,y2),(x3,y3) defining a circle, and a point (x,y) you want to check for being inside that circle, you can do something like:
/**
 * @param x coordinate of point we want to check if inside
 * @param y coordinate of point we want to check if inside
 * @param cx center x
 * @param cy center y
 * @param r radius of circle
 * @return whether (x,y) is inside circle
 */
static boolean g(double x, double y, double cx, double cy, double r) {
    return Math.sqrt((x-cx)*(x-cx) + (y-cy)*(y-cy)) < r;
}

// check if (x,y) is inside circle defined by (x1,y1),(x2,y2),(x3,y3)
static boolean isInside(double x, double y, double x1, double y1, double x2, double y2, double x3, double y3) {
    double m1 = (x1-x2) / (y2-y1);
    double m2 = (x1-x3) / (y3-y1);
    double b1 = ((y1+y2)/2) - m1*(x1+x2)/2;
    double b2 = ((y1+y3)/2) - m2*(x1+x3)/2;
    double xx = (b2-b1) / (m1-m2);
    double yy = m1*xx + b1;
    return g(x, y, xx, yy, Math.sqrt((xx-x1)*(xx-x1) + (yy-y1)*(yy-y1)));
}

public static void main(String[] args) {
    // is (0,1) inside the circle defined by (0,0),(0,2),(1,1) ?
    System.out.println(isInside(0,1, 0,0, 0,2, 1,1));
}
The expression for the center of the circle from 3 points comes from finding the intersection of the perpendicular bisectors of 2 line segments; above I chose (x1,y1)-(x2,y2) and (x1,y1)-(x3,y3). You know a point on each perpendicular bisector, namely the midpoints ((x1+x2)/2, (y1+y2)/2) and ((x1+x3)/2, (y1+y3)/2), and you also know the slope of each perpendicular bisector, namely (x1-x2)/(y2-y1) and (x1-x3)/(y3-y1) from the two line segments respectively, so you can solve for the (x,y) where they intersect.
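In symbols, using the variable names from the code above:

    m1 = (x1-x2)/(y2-y1)    b1 = (y1+y2)/2 - m1·(x1+x2)/2
    m2 = (x1-x3)/(y3-y1)    b2 = (y1+y3)/2 - m2·(x1+x3)/2

so the two bisectors y = m1·x + b1 and y = m2·x + b2 intersect at xx = (b2-b1)/(m1-m2), yy = m1·xx + b1, which is the center of the circle.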

Efficient data structure for storing a long sequence of (mostly consecutive) integers

I'd like a data structure to efficiently store a long sequence of numbers. The numbers should always be whole integers, let's say Longs.
The feature of the inputs I'd like to leverage (to claim 'efficiency') is that the longs will be mostly consecutive. There can be missing values, and the values can be added, removed, and queried out of order.
I'd like the data structure to support the following operations:
addVal(n): add a single value, n
addRange(n, m): add all values between n and m, inclusive
delVal(n): remove a single value, n
delRange(n, m): remove all values between n and m, inclusive
containsVal(n): return whether a single value, n, exists in the structure
containsRange(n, m): return whether all values between n and m, inclusive, exist in the structure
In essence this is a more specific kind of Set data structure that can leverage the continuity of the data to use less than O(n) memory, where n is the number of values stored.
To be clear, while I think an efficient implementation of such a data structure will require that we store intervals internally, that isn't visible or relevant to the user. There are some interval trees that store multiple intervals separately and allow operations to find the number of intervals that overlap with a given point or interval. But from the user perspective this should behave exactly like a set (except for the range-based operations so bulk additions and deletions can be handled efficiently).
Example:
dataStructure = ...
dataStructure.addRange(1,100) // [(1, 100)]
dataStructure.addRange(200,300) // [(1, 100), (200, 300)]
dataStructure.addVal(199) // [(1, 100), (199, 300)]
dataStructure.delRange(50,250) // [(1, 49), (251, 300)]
My assumption is this would best be implemented by some tree-based structure but I don't have a great impression about how to do that yet.
I wanted to learn if there was some commonly used data structure that already satisfies this use case, as I'd rather not reinvent the wheel. If not, I'd like to hear how you think this could best be implemented.
If you don't care about duplicates, then your intervals are non-overlapping. You need to create a structure that maintains that invariant. If you need a query like numIntervalsContaining(n) then that is a different problem.
You could use a BST that stores pairs of endpoints, as in a C++ std::set<std::pair<long,long>>. The interpretation is that each entry corresponds to the interval [n,m]. You need a weak ordering - it is the usual integer ordering on the left endpoint. A single int or long n is inserted as [n,n]. We have to maintain the property that all node intervals are non-overlapping. A brief evaluation of the order of your operations is as follows. Since you've already designated n I use N for the size of the tree.
addVal(n): add a single value, n : O(log N), same as a std::set<int>. Since the intervals are non-overlapping, you need to find the predecessor of n, which can be done in O(log N) time (break it down by cases as in https://www.quora.com/How-can-you-find-successors-and-predecessors-in-a-binary-search-tree-in-order). Look at the right endpoint of this predecessor, and extend the interval or add an additional node [n,n] if necessary, which by left-endpoint ordering will always be a right child. Note that if the interval is extended (inserting [n+1,n+1] into a tree with node [a,n], forming the node [a,n+1]) it may now bump into the next left endpoint, requiring another merge. So there are a few edge cases to consider. A little more complicated than a simple BST, but still O(log N).
addRange(n, m): O(log N), the process is similar. If the inserted interval intersects others nontrivially, merge the intervals so that the non-overlapping property is maintained. The worst-case performance is O(N), as pointed out below, since there is no upper limit on the number of subintervals consumed by the interval we are inserting.
delVal(n): O(log N), again O(N) worst case, as we don't know how many intervals are contained in the interval we are deleting.
delRange(n, m): remove all values between n and m, inclusive : O(log N)
containsVal(n): return whether a single value, n, exists in the structure : O(log N)
containsRange(n, m): return whether all values between n and m, inclusive, exist in the structure : O(log N)
Note that we can maintain the non-overlapping property with correct add() and addRange() methods; it is automatically maintained by the delete methods. We need O(n) storage at worst (when no values coalesce into ranges).
Note that, apart from the worst cases above, all operations are O(log N), and inserting the range [n,m] costs nothing like O(m-n) or O(log(m-n)) or anything like that.
I assume you don't care about duplicates, just membership. Otherwise you may need an interval tree or KD-tree or something, but those are more relevant for float data...
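If you'd rather see it concretely, here is a minimal sketch of the same idea in Python (a sorted list of non-overlapping inclusive intervals kept with bisect, standing in for the balanced BST; the names and the plain-list representation are mine, so middle insertions/deletions are O(N) rather than O(log N)):

import bisect

class IntervalSet:
    """Non-overlapping, sorted [start, end] intervals (inclusive)."""
    def __init__(self):
        self.starts = []   # sorted left endpoints
        self.ends = []     # parallel right endpoints

    def addRange(self, n, m):
        # find the run of intervals that overlap or touch [n, m]
        i = bisect.bisect_left(self.starts, n)
        if i > 0 and self.ends[i - 1] >= n - 1:
            i -= 1                        # predecessor merges with [n, m]
        j = bisect.bisect_right(self.starts, m + 1)
        if i < j:                         # coalesce the run i..j-1
            n = min(n, self.starts[i])
            m = max(m, self.ends[j - 1])
        self.starts[i:j] = [n]
        self.ends[i:j] = [m]

    def addVal(self, n): self.addRange(n, n)

    def delRange(self, n, m):
        i = bisect.bisect_left(self.starts, n)
        if i > 0 and self.ends[i - 1] >= n:
            i -= 1
        j = bisect.bisect_right(self.starts, m)
        if i == j: return                 # nothing stored inside [n, m]
        pieces = []
        if self.starts[i] < n:            # keep the part left of n
            pieces.append((self.starts[i], n - 1))
        if self.ends[j - 1] > m:          # keep the part right of m
            pieces.append((m + 1, self.ends[j - 1]))
        self.starts[i:j] = [p[0] for p in pieces]
        self.ends[i:j] = [p[1] for p in pieces]

    def delVal(self, n): self.delRange(n, n)

    def containsRange(self, n, m):
        i = bisect.bisect_right(self.starts, n) - 1
        return i >= 0 and self.ends[i] >= m

    def containsVal(self, n): return self.containsRange(n, n)

Running the question's example gives the expected states:

s = IntervalSet()
s.addRange(1, 100); s.addRange(200, 300); s.addVal(199)
s.delRange(50, 250)
print(list(zip(s.starts, s.ends)))   # [(1, 49), (251, 300)]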
Another alternative might be a rope data structure ( https://en.m.wikipedia.org/wiki/Rope_(data_structure) ), which seems to support the operations you are asking for, implemented in O(log n) time. As opposed to the example in Wikipedia, yours would store [start,end] rather than string subsequences.
What's interesting about the rope is its efficient lookup of index-within-interval. It accomplishes this by ordering all value positions from left to right - a lower to higher positioning (of which your intervals would be a straightforward representation) can be either upwards or downwards as long as the movement is to the right - as well as relying on storing subtree size, which orients current position based on the weight on the left. Engulfing partial intervals by larger encompassing intervals could be accomplished in O(log n) time by updating and unlinking relevant tree segments.
The problem with storing each interval as a (start,end) couple is that if you add a new range that encompasses N previously stored intervals, you have to destroy each of these intervals, which takes N steps, whether the intervals are stored in a tree, a rope or a linked list.
(You could leave them for automatic garbage collection, but that will take time too, and only works in some languages.)
A possible solution for this could be to store the values (not the start and end points of intervals) in an N-ary tree, where each node represents a range and stores two N-bit maps, representing N sub-ranges and whether the values in those sub-ranges are all present, all absent, or mixed. In the case of mixed, there would be a pointer to a child node which represents this sub-range.
Example: (using a tree with branching factor 8 and height 2)
full range: 0-511 ; store interval 100-300
0-511:
0- 63 64-127 128-191 192-255 256-319 320-383 384-447 448-511
0 mixed 1 1 mixed 0 0 0
64-127:
64- 71 72- 79 80- 87 88- 95 96-103 104-111 112-119 120-127
0 0 0 0 mixed 1 1 1
96-103:
96 97 98 99 100 101 102 103
0 0 0 0 1 1 1 1
256-319:
256-263 264-271 272-279 280-287 288-295 296-303 304-311 312-319
1 1 1 1 1 mixed 0 0
296-303:
296 297 298 299 300 301 302 303
1 1 1 1 1 0 0 0
So the tree would contain these five nodes:
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 00000111, mixed: 00001000, 1 pointer to sub-node
- values: 00001111, mixed: 00000000
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
The point of storing the interval this way is that you can discard an interval without having to actually delete it. Let's say you add a new range 200-400; in that case, you'd change the range 256-319 in the root node from "mixed" to "1", without deleting or updating the 256-319 and 296-303 nodes themselves; these nodes can be kept for later re-use (or disconnected and put in a queue of re-usable sub-trees, or deleted in a programmed garbage collection when the program is idling or running low on memory).
When looking up an interval, you only have to go as deep down the tree as necessary; when looking up e.g. 225-275, you'd find that 192-255 is all-present, 256-319 is mixed, 256-263 and 264-271 and 272-279 are all-present, and you'd know the answer is true. Since these values would be stored as bitmaps (one for present/absent, one for mixed/solid), all the values in a node could be checked with only a few bitwise comparisons.
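As a tiny illustration of those bitwise checks, here is the root node of the example above in Python (reading each bitmap left to right as bit 7 down to bit 0; this sketches the check only, not the full lookup):

value = 0b00110000    # sub-ranges 128-191 and 192-255 are all-present
mixed = 0b01001000    # sub-ranges 64-127 and 256-319 are mixed

mask = 0b00110000     # "are sub-ranges 128-191 and 192-255 fully present?"
fully_present = (value & mask) == mask and (mixed & mask) == 0
print(fully_present)  # True, established with just two AND operations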
Re-using nodes:
If a node has a child node, and the corresponding value is later set from mixed to all-absent or all-present, the child node no longer holds relevant values (but it is being ignored). When the value is changed back to mixed, the child node can be updated by setting all its values to its value in the parent node (before it was changed to mixed) and then making the necessary changes.
In the example above, if we add the range 0-200, this will change the tree to:
- values: 11110000, mixed: 00001000, 2 pointers to sub-nodes
- (values: 00000111, mixed: 00001000, 1 pointer to sub-node)
- (values: 00001111, mixed: 00000000)
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
The second and third node now contain outdated values, and are being ignored. If we then delete the range 80-95, the value for range 64-127 in the root node is changed to mixed again, and the node for range 64-127 is re-used. First we set all values in it to all-present (because that was the previous value of the parent node), and then we set the values for 80-87 and 88-95 to all-absent. The third node, for range 96-103 remains out-of-use.
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 11001111, mixed: 00000000, 1 pointer to sub-node
- (values: 00001111, mixed: 00000000)
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
If we then added value 100, the value for range 96-103 in the second node would be changed to mixed again, and the third node would be updated to all-absent (its previous value in the second node) and then value 100 would be set to present:
- values: 00110000, mixed: 01001000, 2 pointers to sub-nodes
- values: 11000111, mixed: 00001000, 1 pointer to sub-node
- values: 00001000, mixed: 00000000
- values: 11111000, mixed: 00000100, 1 pointer to sub-node
- values: 11111000, mixed: 00000000
At first it may seem that this data structure uses a lot of storage space compared to solutions which store the intervals as (start,end) pairs. However, let's look at the (theoretical) worst-case scenario, where every even number is present and every odd number is absent, across the whole 64-bit range:
Total range: 0 ~ 18,446,744,073,709,551,615
Intervals: 9,223,372,036,854,775,808
A data structure which stores these as (start,end) pairs would use:
Nodes: 9,223,372,036,854,775,808
Size of node: 16 bytes
TOTAL: 147,573,952,589,676,412,928 bytes
If the data structure uses nodes which are linked via (64-bit) pointers, that would add:
Data: 147,573,952,589,676,412,928 bytes
Pointers: 73,786,976,294,838,206,456 bytes
TOTAL: 221,360,928,884,514,619,384 bytes
An N-ary tree with branching factor 64 (and 16 for the last level, to get a total range of 10×6 + 1×4 = 64 bits) would use:
Nodes (branch): 285,942,833,483,841
Size of branch: 528 bytes
Nodes (leaf): 18,014,398,509,481,984
Size of leaf: 144 bytes
TOTAL: 2,745,051,201,444,873,744 bytes
which is 53.76 times less than (start,end) pair structures (or 80.64 times less including pointers).
The calculation was done with the following N-ary tree:
Branch (9 levels):
value: 64-bit map
mixed: 64-bit map
sub-nodes: 64 pointers
TOTAL: 528 bytes
Leaf:
value: 64-bit map
mixed: 64-bit map
sub-nodes: 64 16-bit maps (more efficient than pointing to sub-node)
TOTAL: 144 bytes
This is of course a worst-case comparison; the average case would depend very much on the specific input.
Here's a first code example I wrote to test the idea. The nodes have branching factor 16, so that every level stores 4 bits of the integers, and common bit depths can be obtained without different leaves and branches. As an example, a tree of depth 3 is created, representing a range of 4×4 = 16 bits.
function setBits(pattern, value, mask) { // helper function (make inline)
    return (pattern & ~mask) | (value ? mask : 0);
}

function Node(value) { // CONSTRUCTOR
    this.value = value ? 65535 : 0; // set all values to 1 or 0
    this.mixed = 0;                 // set all to non-mixed
    this.sub = null;                // no pointer array yet
}

Node.prototype.prepareSub = function(pos, mask, value) {
    if ((this.mixed & mask) == 0) {                 // not mixed, child possibly outdated
        var prev = (this.value & mask) >> pos;
        if (value == prev) return false;            // child doesn't require setting
        if (!this.sub) this.sub = [];               // create array of pointers
        if (this.sub[pos]) {
            this.sub[pos].value = prev ? 65535 : 0; // update child node values
            this.sub[pos].mixed = 0;
        }
        else this.sub[pos] = new Node(prev);        // create new child node
    }
    return true;                                    // child requires setting
}

Node.prototype.set = function(value, a, b, step) {
    var posA = Math.floor(a / step), posB = Math.floor(b / step);
    var maskA = 1 << posA, maskB = 1 << posB;
    a %= step; b %= step;
    if (step == 1) { // node is leaf
        var vMask = (maskB | (maskB - 1)) ^ (maskA - 1); // bits posA to posB inclusive
        this.value = setBits(this.value, value, vMask);
    }
    else if (posA == posB) { // only 1 sub-range to be set
        if (a == 0 && b == step - 1) // a-b is full sub-range
        {
            this.value = setBits(this.value, value, maskA);
            this.mixed = setBits(this.mixed, 0, maskA);
        }
        else if (this.prepareSub(posA, maskA, value)) { // child node requires setting
            var solid = this.sub[posA].set(value, a, b, step >> 4);      // recurse
            this.value = setBits(this.value, solid ? value : 0, maskA); // set value
            this.mixed = setBits(this.mixed, solid ? 0 : 1, maskA);     // set mixed
        }
    }
    else { // multiple sub-ranges to set
        var vMask = (maskB - 1) ^ (maskA | (maskA - 1)); // bits between posA and posB
        this.value = setBits(this.value, value, vMask);  // set inbetween values
        this.mixed &= ~vMask;                            // set inbetween to solid
        var solidA = true, solidB = true;
        if (a != 0 && this.prepareSub(posA, maskA, value)) { // child needs setting
            solidA = this.sub[posA].set(value, a, step - 1, step >> 4);
        }
        if (b != step - 1 && this.prepareSub(posB, maskB, value)) { // child needs setting
            solidB = this.sub[posB].set(value, 0, b, step >> 4);
        }
        this.value = setBits(this.value, solidA ? value : 0, maskA); // set value
        this.value = setBits(this.value, solidB ? value : 0, maskB);
        if (solidA) this.mixed &= ~maskA; else this.mixed |= maskA;  // set mixed
        if (solidB) this.mixed &= ~maskB; else this.mixed |= maskB;
    }
    return this.mixed == 0 && (this.value == 0 || this.value == 65535); // whether node is solid
}

Node.prototype.check = function(a, b, step) {
    var posA = Math.floor(a / step), posB = Math.floor(b / step);
    var maskA = 1 << posA, maskB = 1 << posB;
    a %= step; b %= step;
    var vMask = (maskB - 1) ^ (maskA | (maskA - 1)); // bits between posA and posB
    if (step == 1) {
        vMask = posA == posB ? maskA : vMask | maskA | maskB;
        return (this.value & vMask) == vMask;
    }
    if (posA == posB) {
        var present = (this.mixed & maskA) ? this.sub[posA].check(a, b, step >> 4) : this.value & maskA;
        return !!present;
    }
    var present = (this.mixed & maskA) ? this.sub[posA].check(a, step - 1, step >> 4) : this.value & maskA;
    if (!present) return false;
    present = (this.mixed & maskB) ? this.sub[posB].check(0, b, step >> 4) : this.value & maskB;
    if (!present) return false;
    return (this.value & vMask) == vMask;
}

function NaryTree(factor, depth) { // CONSTRUCTOR
    this.root = new Node();
    this.step = Math.pow(factor, depth);
}

NaryTree.prototype.addRange = function(a, b) {
    this.root.set(1, a, b, this.step);
}
NaryTree.prototype.delRange = function(a, b) {
    this.root.set(0, a, b, this.step);
}
NaryTree.prototype.hasRange = function(a, b) {
    return this.root.check(a, b, this.step);
}

var intervals = new NaryTree(16, 3); // create tree for 16-bit range

// CREATE RANDOM DATA AND RUN TEST
document.write("Created N-ary tree for 16-bit range.<br>Randomly adding/deleting 100000 intervals...");
for (var test = 0; test < 100000; test++) {
    var a = Math.floor(Math.random() * 61440);
    var b = a + Math.floor(Math.random() * 4096);
    if (Math.random() > 0.5) intervals.addRange(a, b);
    else intervals.delRange(a, b);
}
document.write("<br>Checking a few random intervals:<br>");
for (var test = 0; test < 8; test++) {
    var a = Math.floor(Math.random() * 65280);
    var b = a + Math.floor(Math.random() * 256);
    document.write("Tree has interval " + a + "-" + b + " ? " + intervals.hasRange(a, b) + ".<br>");
}
I ran a test to check how many nodes are being created, and how many of these are active or dormant. I used a total range of 24-bit (so that I could test the worst-case without running out of memory), divided into 6 levels of 4 bits (so each node has 16 sub-ranges); the number of nodes that need to be checked or updated when adding, deleting or checking an interval is 11 or less. The maximum number of nodes in this scheme is 1,118,481.
The graph below shows the number of active nodes when you keep adding/deleting random intervals with range 1 (single integers), 1~16, 1~256 ... 1~16M (the full range).
Adding and deleting single integers (dark green line) creates active nodes up to close to the maximum 1,118,481 nodes, with almost no nodes being made dormant. The maximum is reached after adding and deleting around 16M integers (= the number of integers in the range).
If you add and delete random intervals in a larger range, the number of nodes that are created is roughly the same, but more of them are being made dormant. If you add random intervals in the full 1~16M range (bright yellow line), less than 64 nodes are active at any time, no matter how many intervals you keep adding or deleting.
This already gives an idea of where this data structure could be useful as opposed to others: the more nodes are being made dormant, the more intervals/nodes would need to be deleted in other schemes.
On the other hand, it shows how this data structure may be too space-inefficient for certain ranges, and types and amounts of input. You could introduce a dormant node recycling system, but that takes away the advantage of the dormant nodes being immediately reusable.
A lot of space in the N-ary tree is taken up by pointers to child nodes. If the complete range is small enough, you could store the tree in an array. For a 32-bit range that would take 580 MB (546 MB for the "value" bitmaps and 34 MB for the "mixed" bitmaps). This is more space-efficient because you only store the bitmaps, and you don't need pointers to child nodes, because everything has a fixed place in the array. You'd have the advantage of a tree with depth 7, so any operation could be done by checking 15 "nodes" or fewer, and no nodes need to be created or deleted during add/delete/check operations.
Here's a code example I used to try out the N-ary-tree-in-an-array idea; it uses 580MB to store a N-ary tree with branching factor 16 and depth 7, for a 32-bit range (unfortunately, a range above 40 bits or so is probably beyond the memory capabilities of any normal computer). In addition to the requested functions, it can also check whether an interval is completely absent, using notValue() and notRange().
#include <iostream>

inline uint16_t setBits(uint16_t pattern, uint16_t mask, uint16_t value) {
    return (pattern & ~mask) | (value & mask);
}

class NaryTree32 {
    uint16_t value[0x11111111], mixed[0x01111111];

    bool set(uint32_t a, uint32_t b, uint16_t value = 0xFFFF, uint8_t bits = 28, uint32_t offset = 0) {
        uint8_t posA = a >> bits, posB = b >> bits;
        uint16_t maskA = 1 << posA, maskB = 1 << posB;
        uint16_t mask = maskB ^ (maskA - 1) ^ (maskB - 1);
        // IF NODE IS LEAF: SET VALUE BITS AND RETURN WHETHER VALUES ARE MIXED
        if (bits == 0) {
            this->value[offset] = setBits(this->value[offset], mask, value);
            return this->value[offset] != 0 && this->value[offset] != 0xFFFF;
        }
        uint32_t offsetA = offset * 16 + posA + 1, offsetB = offset * 16 + posB + 1;
        uint32_t subMask = ~(0xFFFFFFFF << bits);
        a &= subMask; b &= subMask;
        // IF SUB-RANGE A IS MIXED OR HAS WRONG VALUE
        if (((this->mixed[offset] & maskA) != 0 || (this->value[offset] & maskA) != (value & maskA))
                && (a != 0 || posA == posB && b != subMask)) {
            // IF SUB-RANGE WAS PREVIOUSLY SOLID: UPDATE TO PREVIOUS VALUE
            if ((this->mixed[offset] & maskA) == 0) {
                this->value[offsetA] = (this->value[offset] & maskA) ? 0xFFFF : 0x0000;
                if (bits != 4) this->mixed[offsetA] = 0x0000;
            }
            // RECURSE AND IF SUB-NODE IS MIXED: SET MIXED BIT AND REMOVE A FROM MASK
            if (this->set(a, posA == posB ? b : subMask, value, bits - 4, offsetA)) {
                this->mixed[offset] |= maskA;
                mask ^= maskA;
            }
        }
        // IF SUB-RANGE B IS MIXED OR HAS WRONG VALUE
        if (((this->mixed[offset] & maskB) != 0 || (this->value[offset] & maskB) != (value & maskB))
                && b != subMask && posA != posB) {
            // IF SUB-RANGE WAS PREVIOUSLY SOLID: UPDATE SUB-NODE TO PREVIOUS VALUE
            if ((this->mixed[offset] & maskB) == 0) {
                this->value[offsetB] = (this->value[offset] & maskB) ? 0xFFFF : 0x0000;
                if (bits > 4) this->mixed[offsetB] = 0x0000;
            }
            // RECURSE AND IF SUB-NODE IS MIXED: SET MIXED BIT AND REMOVE B FROM MASK
            if (this->set(0, b, value, bits - 4, offsetB)) {
                this->mixed[offset] |= maskB;
                mask ^= maskB;
            }
        }
        // SET VALUE AND MIXED BITS THAT HAVEN'T BEEN SET YET AND RETURN WHETHER NODE IS MIXED
        if (mask) {
            this->value[offset] = setBits(this->value[offset], mask, value);
            this->mixed[offset] &= ~mask;
        }
        return this->mixed[offset] != 0 || this->value[offset] != 0 && this->value[offset] != 0xFFFF;
    }

    bool check(uint32_t a, uint32_t b, uint16_t value = 0xFFFF, uint8_t bits = 28, uint32_t offset = 0) {
        uint8_t posA = a >> bits, posB = b >> bits;
        uint16_t maskA = 1 << posA, maskB = 1 << posB;
        uint16_t mask = maskB ^ (maskA - 1) ^ (maskB - 1);
        // IF NODE IS LEAF: CHECK BITS A TO B INCLUSIVE AND RETURN
        if (bits == 0) {
            return (this->value[offset] & mask) == (value & mask);
        }
        uint32_t subMask = ~(0xFFFFFFFF << bits);
        a &= subMask; b &= subMask;
        // IF SUB-RANGE A IS MIXED AND PART OF IT NEEDS CHECKING: RECURSE AND RETURN IF FALSE
        if ((this->mixed[offset] & maskA) && (a != 0 || posA == posB && b != subMask)) {
            if (this->check(a, posA == posB ? b : subMask, value, bits - 4, offset * 16 + posA + 1)) {
                mask ^= maskA;
            }
            else return false;
        }
        // IF SUB-RANGE B IS MIXED AND PART OF IT NEEDS CHECKING: RECURSE AND RETURN IF FALSE
        if (posA != posB && (this->mixed[offset] & maskB) && b != subMask) {
            if (this->check(0, b, value, bits - 4, offset * 16 + posB + 1)) {
                mask ^= maskB;
            }
            else return false;
        }
        // CHECK INBETWEEN BITS (AND A AND/OR B IF NOT YET CHECKED) WHETHER SOLID AND CORRECT
        return (this->mixed[offset] & mask) == 0 && (this->value[offset] & mask) == (value & mask);
    }

public:
    NaryTree32() { // CONSTRUCTOR: INITIALISES ROOT NODE
        this->value[0] = 0x0000;
        this->mixed[0] = 0x0000;
    }
    void addValue(uint32_t a) {this->set(a, a);}
    void addRange(uint32_t a, uint32_t b) {this->set(a, b);}
    void delValue(uint32_t a) {this->set(a, a, 0);}
    void delRange(uint32_t a, uint32_t b) {this->set(a, b, 0);}
    bool hasValue(uint32_t a) {return this->check(a, a);}
    bool hasRange(uint32_t a, uint32_t b) {return this->check(a, b);}
    bool notValue(uint32_t a) {return this->check(a, a, 0);}
    bool notRange(uint32_t a, uint32_t b) {return this->check(a, b, 0);}
};

int main() {
    NaryTree32 *tree = new NaryTree32();
    tree->addRange(4294967280, 4294967295);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->delValue(4294967290);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->addRange(1000000000, 4294967295);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    tree->delRange(2000000000, 4294967280);
    std::cout << tree->hasRange(4294967280, 4294967295) << "\n";
    return 0;
}
Interval trees seem to be geared toward storing overlapping intervals, while in your case that doesn't make sense. An interval tree could hold millions of small overlapping intervals, which together form only a handful of longer non-overlapping intervals.
If you want to store only non-overlapping intervals, then adding or deleting an interval may involve deleting a number of consecutive intervals that fall within the new interval. So quickly finding consecutive intervals, and efficient deletion of a potentially large number of intervals are important.
That sounds like a job for the humble linked list. When inserting a new interval, you'd:
Search the position of the new interval's starting point.
If it is inside an existing interval, go on to find the position of the end point, while extending the existing interval and deleting all intervals you pass on the way.
If it is in between existing intervals, check if the end point comes before the next existing interval. If it does, create a new interval. If the end point comes after the start of the next existing interval, change the starting point of the next interval, and then go on to find the end point as explained in the previous paragraph.
Deleting an interval would be largely the same: you truncate the intervals that the starting point and end point are inside of, and delete all the intervals in between.
The average and worst-case complexity of this are N/2 and N, where N is the number of intervals in the linked list. You could improve this by adding a method to avoid having to iterate over the whole list to find the starting point; if you know the range and distribution of the values, this could be something like a hash table; e.g. if the values are from 1 to X and the distribution is uniform, you'd store a table of length Y, where each item points to the interval that starts before the value X/Y. When adding an interval (A,B), you'd look up table[A/Y] and start iterating over the linked list from there. The choice of value for Y would be determined by how much space you want to use, versus how close you want to get to the actual position of the starting point. The complexities would then drop by a factor Y.
(If you work in a language where you can short-circuit a linked list, and just leave the chain of objects you cut loose to be garbage-collected, you could find the location of the starting point and end point independently, connect them, and skip the deletion of all the intervals inbetween. I don't know whether this would actually increase speed in practice.)
Here's a start of a code example, with the three range functions, but without further optimisation:
function Interval(a, b, n) {
    this.start = a;
    this.end = b;
    this.next = n;
}

function IntervalList() {
    this.first = null;
}

IntervalList.prototype.addRange = function(a, b) {
    if (!this.first || b < this.first.start - 1) {
        this.first = new Interval(a, b, this.first); // insert as first element
        return;
    }
    var i = this.first;
    while (a > i.end + 1 && i.next && b >= i.next.start - 1) {
        i = i.next; // locate starting point
    }
    if (a > i.end + 1) { // insert as new element
        i.next = new Interval(a, b, i.next);
        return;
    }
    var j = i.next;
    while (j && b >= j.start - 1) { // locate end point
        i.end = j.end;
        i.next = j = j.next; // discard overlapping interval
    }
    if (a < i.start) i.start = a; // update interval start
    if (b > i.end) i.end = b;     // update interval end
}

IntervalList.prototype.delRange = function(a, b) {
    if (!this.first || b < this.first.start) return; // range before first interval
    var i = this.first;
    while (i.next && a > i.next.start) i = i.next; // a in or after interval i
    if (a > i.start) { // a in interval
        if (b < i.end) { // range in interval -> split
            i.next = new Interval(b + 1, i.end, i.next);
            i.end = a - 1;
            return;
        }
        if (a <= i.end) i.end = a - 1; // truncate interval
    }
    var j = a > i.start ? i.next : i;
    while (j && b >= j.end) j = j.next; // b before or in interval j
    if (a <= this.first.start) this.first = j; // short-circuit list
    else i.next = j;
    if (j && b >= j.start) j.start = b + 1; // truncate interval
}

IntervalList.prototype.hasRange = function(a, b) {
    if (!this.first) return false; // empty list
    var i = this.first;
    while (i.next && a > i.end) i = i.next; // a before or in interval i
    return a >= i.start && b <= i.end; // range in interval ?
}

IntervalList.prototype.addValue = function(a) {
    this.addRange(a, a); // could be optimised
}
IntervalList.prototype.delValue = function(a) {
    this.delRange(a, a); // could be optimised
}
IntervalList.prototype.hasValue = function(a) {
    return this.hasRange(a, a); // could be optimised
}

IntervalList.prototype.print = function() {
    var i = this.first;
    if (i) do document.write("(" + i.start + "-" + i.end + ") "); while (i = i.next);
    document.write("<br>");
}

var intervals = new IntervalList();
intervals.addRange(100, 199);
document.write("+ (100-199) → "); intervals.print();
intervals.addRange(300, 399);
document.write("+ (300-399) → "); intervals.print();
intervals.addRange(200, 299);
document.write("+ (200-299) → "); intervals.print();
intervals.delRange(225, 275);
document.write("− (225-275) → "); intervals.print();
document.write("(150-200) ? " + intervals.hasRange(150,200) + "<br>");
document.write("(200-300) ? " + intervals.hasRange(200,300) + "<br>");
I'm surprised no one has suggested segment trees over the integer domain of stored values. (When used in geometric applications like graphics in 2d and 3d, they're called quadtrees and octrees resp.) Insert, delete, and lookup will have time and space complexity proportional to the number of bits in (maxval - minval), that is log_2 (maxval - minval), the max and min values of the integer data domain.
In a nutshell, we are encoding a set of integers in [minval, maxval]. A node at topmost level 0 represents that entire range. Each successive level's nodes represent sub-ranges of approximate size (maxval - minval) / 2^k. When a node is included, some subset of its corresponding values are part of the represented set. When it's a leaf, all of its values are in the set. When it's absent, none are.
E.g. if minval=0 and maxval=7, then the k=1 children of the k=0 node represent [0..3] and [4..7]. Their children at level k=2 are [0..1], [2..3], [4..5], and [6..7], and the k=3 nodes represent individual elements. The set {[1..3], [6..7]} would be the tree (levels left to right):
[0..7] -- [0..3] -- [0..1]
   |         |         `- [1]
   |         `- [2..3]
   `- [4..7]
         `- [6..7]
It's not hard to see that space for the tree will be O(m log (maxval - minval)) where m is the number of intervals stored in the tree.
It's not common to use segment trees with dynamic insert and delete, but the algorithms turn out to be fairly simple. It takes some care to ensure the number of nodes is minimized.
Here is some very lightly tested Java code.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SegmentTree {
    // Shouldn't differ by more than Long.MAX_VALUE to prevent overflow.
    static final long MIN_VAL = 0;
    static final long MAX_VAL = Long.MAX_VALUE;

    Node root;

    static class Node {
        Node left;
        Node right;

        Node(Node left, Node right) {
            this.left = left;
            this.right = right;
        }
    }

    private static boolean isLeaf(Node node) {
        return node != null && node.left == null && node.right == null;
    }

    private static Node reset(Node node, Node left, Node right) {
        if (node == null) {
            return new Node(left, right);
        }
        node.left = left;
        node.right = right;
        return node;
    }

    /**
     * Accept an arbitrary subtree rooted at a node representing a subset S of the range [lo,hi] and
     * transform it into a subtree representing S + [a,b]. It's assumed a >= lo and b <= hi.
     */
    private static Node add(Node node, long lo, long hi, long a, long b) {
        // If the range is empty, the interval tree is always null.
        if (lo > hi) return null;
        // If this is a leaf or insertion is outside the range, there's no change.
        if (isLeaf(node) || a > b || b < lo || a > hi) return node;
        // If insertion fills the range, return a leaf.
        if (a == lo && b == hi) return reset(node, null, null);
        // Insertion doesn't cover the range. Get the children, if any.
        Node left = null, right = null;
        if (node != null) {
            left = node.left;
            right = node.right;
        }
        // Split the range and recur to insert in halves.
        long mid = lo + (hi - lo) / 2;
        left = add(left, lo, mid, a, Math.min(b, mid));
        right = add(right, mid + 1, hi, Math.max(a, mid + 1), b);
        // Build a new node, coalescing to leaf if both children are leaves.
        return isLeaf(left) && isLeaf(right) ? reset(node, null, null) : reset(node, left, right);
    }

    /**
     * Accept an arbitrary subtree rooted at a node representing a subset S of the range [lo,hi] and
     * transform it into a subtree representing range(s) S - [a,b]. It's assumed a >= lo and b <= hi.
     */
    private static Node del(Node node, long lo, long hi, long a, long b) {
        // If the tree is null, we can't remove anything, so it's still null
        // or if the range is empty, the tree is null.
        if (node == null || lo > hi) return null;
        // If the deletion is outside the range, there's no change.
        if (a > b || b < lo || a > hi) return node;
        // If deletion fills the range, return an empty tree.
        if (a == lo && b == hi) return null;
        // Deletion doesn't fill the range.
        // Replace a leaf with a tree that has the deleted portion removed.
        if (isLeaf(node)) {
            return add(add(null, lo, hi, b + 1, hi), lo, hi, lo, a - 1);
        }
        // Not a leaf. Get children, if any.
        Node left = node.left, right = node.right;
        long mid = lo + (hi - lo) / 2;
        // Recur to delete in child ranges.
        left = del(left, lo, mid, a, Math.min(b, mid));
        right = del(right, mid + 1, hi, Math.max(a, mid + 1), b);
        // Build a new node, coalescing to empty tree if both children are empty.
        return left == null && right == null ? null : reset(node, left, right);
    }

    private static class NotContainedException extends Exception {};

    private static void verifyContains(Node node, long lo, long hi, long a, long b)
            throws NotContainedException {
        // If this is a leaf or query is empty, it's always contained.
        if (isLeaf(node) || a > b) return;
        // If tree or search range is empty, the query is never contained.
        if (node == null || lo > hi) throw new NotContainedException();
        long mid = lo + (hi - lo) / 2;
        verifyContains(node.left, lo, mid, a, Math.min(b, mid));
        verifyContains(node.right, mid + 1, hi, Math.max(a, mid + 1), b);
    }

    SegmentTree addRange(long a, long b) {
        root = add(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
        return this;
    }

    SegmentTree addVal(long a) {
        return addRange(a, a);
    }

    SegmentTree delRange(long a, long b) {
        root = del(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
        return this;
    }

    SegmentTree delVal(long a) {
        return delRange(a, a);
    }

    boolean containsVal(long a) {
        return containsRange(a, a);
    }

    boolean containsRange(long a, long b) {
        try {
            verifyContains(root, MIN_VAL, MAX_VAL, Math.max(a, MIN_VAL), Math.min(b, MAX_VAL));
            return true;
        } catch (NotContainedException expected) {
            return false;
        }
    }

    private static final boolean PRINT_SEGS_COALESCED = true;

    /** Gather a list of possibly coalesced segments for printing. */
    private static void gatherSegs(List<Long> segs, Node node, long lo, long hi) {
        if (node == null) {
            return;
        }
        if (node.left == null && node.right == null) {
            if (PRINT_SEGS_COALESCED && !segs.isEmpty() && segs.get(segs.size() - 1) == lo - 1) {
                segs.remove(segs.size() - 1);
            } else {
                segs.add(lo);
            }
            segs.add(hi);
        } else {
            long mid = lo + (hi - lo) / 2;
            gatherSegs(segs, node.left, lo, mid);
            gatherSegs(segs, node.right, mid + 1, hi);
        }
    }

    SegmentTree print() {
        List<Long> segs = new ArrayList<>();
        gatherSegs(segs, root, MIN_VAL, MAX_VAL);
        Iterator<Long> it = segs.iterator();
        while (it.hasNext()) {
            long a = it.next();
            long b = it.next();
            System.out.print("[" + a + "," + b + "]");
        }
        System.out.println();
        return this;
    }

    public static void main(String[] args) {
        SegmentTree tree = new SegmentTree()
            .addRange(0, 4).print()
            .addRange(6, 7).print()
            .delVal(2).print()
            .addVal(5).print()
            .addRange(0, 1000).print()
            .addVal(5).print()
            .delRange(22, 777).print();
        System.out.println(tree.containsRange(3, 20));
    }
}

Are there any numerically stable versions of the centroid finding algorithm for polygons?

Say I have an almost degenerate 2-D polygon such as:
[[40.802,9.289],[40.875,9.394],[40.910000000000004,9.445],[40.911,9.446],[40.802,9.289]]
For reference this looks like:
If I use the standard centroid algorithm as shown on Wikipedia, for example this python code:
pts = [[40.802,9.289],[40.875,9.394],[40.910000000000004,9.445],[40.911,9.446],[40.802,9.289]]
a = 0.0
c = [0.0, 0.0]
for i in range(0, 4):
    k = pts[i][0] * pts[i + 1][1] - pts[i + 1][0] * pts[i][1]
    a += k
    c = [c[0] + k * (pts[i][0] + pts[i + 1][0]), c[1] + k * (pts[i][1] + pts[i + 1][1])]
c = [c[0] / (3 * a), c[1] / (3 * a)]
I get c = [-10133071.666666666, -14636692.583333334]. In other cases where a == 0.0 I might also get a divide by zero.
What I would ideally like is that in the worst case, the centroid is equal to one of the vertices or somewhere within the polygon, and that no arbitrary tolerances should be used for avoiding this situation. Is there some clever way to rewrite the equation to make it more numerically stable?
When the area is zero (or very close to zero, if you cannot afford to do exact arithmetic), probably the best option is to take perimeter centroid of the set of points.
Perimeter centroid is given by the ratio of the weighted sum of midpoints of each side of the polygon (the weight is the length of the corresponding side), to the perimeter of the polygon.
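For instance, a minimal Python sketch of that definition (assuming, like the code in the question, a closed vertex list with pts[0] == pts[-1]; the names are mine):

import math

def perimeter_centroid(pts):
    # pts is a closed polygon: pts[0] == pts[-1]
    sx = sy = perimeter = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        length = math.hypot(x1 - x0, y1 - y0)  # weight = side length
        sx += length * (x0 + x1) / 2           # weighted side midpoint
        sy += length * (y0 + y1) / 2
        perimeter += length
    if perimeter == 0.0:                       # all vertices coincide
        return list(pts[0])
    return [sx / perimeter, sy / perimeter]

pts = [[40.802,9.289],[40.875,9.394],[40.910000000000004,9.445],
       [40.911,9.446],[40.802,9.289]]
print(perimeter_centroid(pts))   # lies on the (nearly degenerate) polygon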
Using exact arithmetic, it is possible to calculate the centroid in this case.
red point is the perimeter centroid and the green one is the true centroid
I used sage to calculate the centroid exactly https://cloud.sagemath.com/projects/f3149cab-2b4b-494a-b795-06d62ae133dd/files/2016-08-17-102024.sagews.
People have been looking for a way to relate these points with respect to each other -- https://math.stackexchange.com/questions/1173903/centroids-of-a-polygon.
I don't think this formula can easily be made more stable for nearly degenerate 2D polygons. The problem is that the calculation of the area (A) relies on subtracting trapezoidal shapes (see Paul Bourke). For very small areas you inevitably run into the limits of numerical precision.
I see two possible solutions:
1.) You could check the area and, if it gets below a threshold, assume the polygon is degenerate and just take the mean of the minimal and maximal x and y values (the middle of the line), as sketched after this list.
2.) Use floating-point arithmetic with higher precision, maybe something like mpmath.
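A rough Python sketch of option 1.) (the threshold eps is exactly the kind of arbitrary tolerance the question hoped to avoid, so treat it as purely illustrative):

def robust_centroid(pts, eps=1e-12):
    # signed-area centroid, falling back to the bounding-box middle
    # when the polygon is (nearly) degenerate; pts[0] == pts[-1]
    a = 0.0
    cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        k = x0 * y1 - x1 * y0
        a += k
        cx += k * (x0 + x1)
        cy += k * (y0 + y1)
    if abs(a) > eps:                 # well-conditioned: the usual formula
        return [cx / (3 * a), cy / (3 * a)]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return [(min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2]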
Btw. you have a mistake in your code. It should be:
c = [c[0] + k * (pts[i][0] + pts[i + 1][0]), c[1] + k * (pts[i][1] + pts[i + 1][1])]
However that doesn't make a difference.
I would say that the following is an authoritative C implementation for the computation of the centroid of a simple polygon; it is written by Joseph O'Rourke, author of the book Computational Geometry in C.
/*
    Written by Joseph O'Rourke
    orourke@cs.smith.edu
    October 27, 1995

    Computes the centroid (center of gravity) of an arbitrary
    simple polygon via a weighted sum of signed triangle areas,
    weighted by the centroid of each triangle.
    Reads x,y coordinates from stdin.
    NB: Assumes points are entered in ccw order!
    E.g., input for square:
        0 0
        10 0
        10 10
        0 10

    This solves Exercise 12, p.47, of my text,
    Computational Geometry in C. See the book for an explanation
    of why this works. Follow links from
        http://cs.smith.edu/~orourke/
*/
#include <stdio.h>
#define DIM 2                    /* Dimension of points */
typedef int tPointi[DIM];        /* type integer point */
typedef double tPointd[DIM];     /* type double point */
#define PMAX 1000                /* Max # of pts in polygon */
typedef tPointi tPolygoni[PMAX]; /* type integer polygon */

int Area2( tPointi a, tPointi b, tPointi c );
void FindCG( int n, tPolygoni P, tPointd CG );
int ReadPoints( tPolygoni P );
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c );
void PrintPoint( tPointd p );

int main()
{
    int n;
    tPolygoni P;
    tPointd CG;

    n = ReadPoints( P );
    FindCG( n, P, CG );
    printf("The cg is ");
    PrintPoint( CG );
}

/*
    Returns twice the signed area of the triangle determined by a,b,c,
    positive if a,b,c are oriented ccw, and negative if cw.
*/
int Area2( tPointi a, tPointi b, tPointi c )
{
    return
        (b[0] - a[0]) * (c[1] - a[1]) -
        (c[0] - a[0]) * (b[1] - a[1]);
}

/*
    Returns the cg in CG. Computes the weighted sum of
    each triangle's area times its centroid. Twice area
    and three times centroid is used to avoid division
    until the last moment.
*/
void FindCG( int n, tPolygoni P, tPointd CG )
{
    int i;
    double A2, Areasum2 = 0;  /* Partial area sum */
    tPointi Cent3;

    CG[0] = 0;
    CG[1] = 0;
    for (i = 1; i < n-1; i++) {
        Centroid3( P[0], P[i], P[i+1], Cent3 );
        A2 = Area2( P[0], P[i], P[i+1] );
        CG[0] += A2 * Cent3[0];
        CG[1] += A2 * Cent3[1];
        Areasum2 += A2;
    }
    CG[0] /= 3 * Areasum2;
    CG[1] /= 3 * Areasum2;
    return;
}

/*
    Returns three times the centroid. The factor of 3 is
    left in to permit division to be avoided until later.
*/
void Centroid3( tPointi p1, tPointi p2, tPointi p3, tPointi c )
{
    c[0] = p1[0] + p2[0] + p3[0];
    c[1] = p1[1] + p2[1] + p3[1];
    return;
}

void PrintPoint( tPointd p )
{
    int i;
    putchar('(');
    for ( i = 0; i < DIM; i++) {
        printf("%f", p[i]);
        if (i != DIM - 1) putchar(',');
    }
    putchar(')');
    putchar('\n');
}

/*
    Reads in the coordinates of the vertices of a polygon from stdin,
    puts them into P, and returns n, the number of vertices.
    The input is assumed to be pairs of whitespace-separated coordinates,
    one pair per line. The number of points is not part of the input.
*/
int ReadPoints( tPolygoni P )
{
    int n = 0;

    printf("Polygon:\n");
    printf("  i   x   y\n");
    while ( (n < PMAX) &&
            (scanf("%d %d", &P[n][0], &P[n][1]) != EOF) ) {
        printf("%3d%4d%4d\n", n, P[n][0], P[n][1]);
        ++n;
    }
    if (n < PMAX)
        printf("n = %3d vertices read\n", n);
    else
        printf("Error in ReadPoints:\ttoo many points; max is %d\n", PMAX);
    putchar('\n');
    return n;
}
The code solves Exercise 12 on page 47 of the first edition of the book; a brief explanation is here:
Subject 2.02: How can the centroid of a polygon be computed?
The centroid (a.k.a. the center of mass, or center of gravity)
of a polygon can be computed as the weighted sum of the centroids
of a partition of the polygon into triangles. The centroid of a
triangle is simply the average of its three vertices, i.e., it
has coordinates (x1 + x2 + x3)/3 and (y1 + y2 + y3)/3. This
suggests first triangulating the polygon, then forming a sum
of the centroids of each triangle, weighted by the area of
each triangle, the whole sum normalized by the total polygon area.
This indeed works, but there is a simpler method: the triangulation
need not be a partition, but rather can use positively and
negatively oriented triangles (with positive and negative areas),
as is used when computing the area of a polygon. This leads to
a very simple algorithm for computing the centroid, based on a
sum of triangle centroids weighted with their signed area.
The triangles can be taken to be those formed by any fixed point,
e.g., the vertex v0 of the polygon, and the two endpoints of
consecutive edges of the polygon: (v1,v2), (v2,v3), etc. The area
of a triangle with vertices a, b, c is half of this expression:
(b[X] - a[X]) * (c[Y] - a[Y]) -
(c[X] - a[X]) * (b[Y] - a[Y]);
Code available at ftp://cs.smith.edu/pub/code/centroid.c (3K).
Reference: [Gems IV] pp.3-6; also includes code.
I did not study this algorithm, nor did I test it, but at first glance it seems to me it is slightly different from the Wikipedia one.
The code from the book Graphics Gems IV is here:
/*
 * ANSI C code from the article
 * "Centroid of a Polygon"
 * by Gerard Bashein and Paul R. Detmer,
 *    (gb@locke.hs.washington.edu, pdetmer@u.washington.edu)
 * in "Graphics Gems IV", Academic Press, 1994
 */
/*********************************************************************
polyCentroid: Calculates the centroid (xCentroid, yCentroid) and area
of a polygon, given its vertices (x[0], y[0]) ... (x[n-1], y[n-1]). It
is assumed that the contour is closed, i.e., that the vertex following
(x[n-1], y[n-1]) is (x[0], y[0]). The algebraic sign of the area is
positive for counterclockwise ordering of vertices in x-y plane;
otherwise negative.
Returned values: 0 for normal execution; 1 if the polygon is
degenerate (number of vertices < 3); and 2 if area = 0 (and the
centroid is undefined).
**********************************************************************/
int polyCentroid(double x[], double y[], int n,
                 double *xCentroid, double *yCentroid, double *area)
{
    register int i, j;
    double ai, atmp = 0, xtmp = 0, ytmp = 0;
    if (n < 3) return 1;
    for (i = n-1, j = 0; j < n; i = j, j++)
    {
        ai = x[i] * y[j] - x[j] * y[i];
        atmp += ai;
        xtmp += (x[j] + x[i]) * ai;
        ytmp += (y[j] + y[i]) * ai;
    }
    *area = atmp / 2;
    if (atmp != 0)
    {
        *xCentroid = xtmp / (3 * atmp);
        *yCentroid = ytmp / (3 * atmp);
        return 0;
    }
    return 2;
}
CGAL allows you to use an exact multi-precision number type instead of double or float in order to get exact computations; it will cost an execution-time overhead. The idea is described in The Exact Computation Paradigm.
A commercial implementation claims to use Green's theorem; I do not know whether it uses a multi-precision number type:
The area and centroid are computed by applying Green's theorem using
only the points on the contour or polygon
I think it refers to the Wikipedia algorithm, since the formulae in Wikipedia are an application of Green's theorem, as explained here.

Converting A Recursive Function into a Non-Recursive Function

I'm trying to convert a recursive function into a non-recursive solution in pseudocode. The reason why I'm running into problems is that the method has two recursive calls in it.
Any help would be great. Thanks.
void mystery(int a, int b) {
    if (b - a > 1) {
        int mid = roundDown(a + b) / 2;
        print mid;
        mystery(a, mid);
        mystery(mid + 1, b);
    }
}
This one seems more interesting, it will result in displaying all numbers from a to (b-1) in an order specific to the recursive function. Note that all of the "left" midpoints get printed before any "right" midpoints.
void mystery (int a, int b) {
    if (b > a) {
        int mid = roundDown(a + b) / 2;
        print mid;
        mystery(a, mid);
        mystery(mid + 1, b);
    }
}
For example, if a = 0, and b = 16, then the output is:
8 4 2 1 0 3 6 5 7 12 10 9 11 14 13 15
The textbook method to turn a recursive procedure into an iterative one is simply to replace the recursive call with a stack and run a "do loop" until the stack is empty.
Try the following:
push(0, 16);                      /* Prime the stack */
call mystery;
...
void mystery {
    do until stackempty() {       /* iterate until stack is empty */
        pop(a, b);                /* pop and process... */
        do while (b - a >= 1) {   /* run the "current" values down... */
            int mid = roundDown(a+b)/2;
            print mid;
            push(mid+1, b);       /* push in place of recursive call */
            b = mid;
        }
    }
}
The original function had two recursive calls, so why only a single stack? Ignore the requirements for the second recursive call and you can easily see that the first recursive call (mystery(a, mid);) could be implemented as a simple loop where b assumes the value of mid on each iteration - nothing else needs to be "remembered". So turn it into a loop, simply push the parameters needed for the recursion onto a stack, and add an outer loop to run the stack down. Done.
With a bit of creative thinking, any recursive function can be turned into an iterative one using stacks.
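Here is the same scheme as runnable Python (my transcription of the pseudocode above, using a list of tuples for the push/pop primitives; it reproduces the 0..16 output shown earlier):

def mystery_iterative(a, b):
    stack = [(a, b)]                    # explicit stack replaces the call stack
    while stack:
        a, b = stack.pop()
        while b > a:                    # same guard as the recursive version
            mid = (a + b) // 2
            print(mid, end=" ")
            stack.append((mid + 1, b))  # defer mystery(mid + 1, b)
            b = mid                     # loop in place of mystery(a, mid)

mystery_iterative(0, 16)  # prints: 8 4 2 1 0 3 6 5 7 12 10 9 11 14 13 15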
This is what is happening. You have a long rod and you divide it into two. Then you take these two parts and divide each into two. You do this with each sub-part until the length of that part becomes 1.
How would you do that?
Assume you have to break the rod at the midpoint. We will put the marks to cut in bins for further cuts. Note: each part spawns two new parts, so we need 2n boxes to store sub-parts.
len = pow(2, b-a+1)   // +1 might not be needed
ar = int[len]         // large array will memoize my marks to cut
ar[0] = a             // first mark
ar[1] = b             // last mark
start_ptr = 0         // will start from this point
end_ptr = 1           // will end at this point
new_end = end_ptr     // our end point will move for cuts

while true:                          // loop endlessly; I do not know, maybe there is a limit
    while start_ptr < end_ptr:       // look into bins
        i = ar[start_ptr]
        j = ar[start_ptr+1]          // pair-wise ends
        if j - i > 1:                // if lengthier than unit length, then add new marks
            mid = floor((i+j) / 2)   // this is my mid
            print mid
            ar[++new_end] = i        // first mark        --|
            ar[++new_end] = mid - 1  // mid - 1 mark      --+-- these two create one pair
            ar[++new_end] = mid + 1  // 2nd half 1st mark --|
            ar[++new_end] = j        // end mark          --+-- these two create 2nd pair
        start_ptr = start_ptr + 2    // jump to next two ends
    if end_ptr == new_end:           // if we passed all the pairs and no new pair
        break                        // was created, we are done
    else:
        end_ptr = new_end            // else, start from the sub-problem
PS: I haven't tried this code; it is just pseudocode, but it seems to me that it should do the job. Let me know if you try it out, as that will validate my algorithm. It is basically a binary tree laid out in an array.
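For what it's worth, here is a C++ sketch of the bin idea with std::queue standing in for the array of marks (my translation, untested against the original). Because the bins are processed first-in-first-out, the midpoints come out level by level (8 4 12 2 6 10 14 ... for a = 0, b = 16); the same set of values is printed as by the recursion, just in breadth-first rather than depth-first order:
#include <cstdio>
#include <queue>
#include <utility>

void mysteryByLevels(int a, int b)
{
    std::queue<std::pair<int, int>> bins;   // the "bins" of marks
    bins.push({a, b});
    while (!bins.empty()) {
        auto [i, j] = bins.front();
        bins.pop();
        if (j - i >= 1) {                   // the piece can still be cut
            int mid = (i + j) / 2;          // floors for non-negative i, j
            std::printf("%d ", mid);
            bins.push({i, mid});            // first half
            bins.push({mid + 1, j});        // second half
        }
    }
}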
This example recursively splits a range of numbers until the range is reduced to a single value. The output shows the structure of the splits: the single values are output in order, but grouped according to the left-side-first splitting.
void split(int a, int b)
{
    int m;
    if ((b - a) < 2) {        /* if size == 1, print the single value */
        printf(" | %2d", a);
        return;
    }
    m = (a + b) / 2;          /* else split the range */
    printf("\n%2d %2d %2d", a, m, b);
    split(a, m);
    split(m, b);
}
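A tiny driver (hypothetical, just so the example compiles and runs as a whole) would be:
#include <cstdio>

void split(int a, int b);   /* the function above */

int main(void)
{
    split(0, 16);
    printf("\n");
    return 0;
}
Each split prints its (a, m, b) triple on a new line, and the unit-size ranges are printed inline after " | ", which is what produces the grouped structure.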

Find a simple cycle in a weighted undirected graph whose length lies in a given user defined range

EDITED - 12/03/12 @ 1:05 AM PST
I have edited my code as follows. However, I am still not getting it to return any paths.
Again, this code is supposed to compute paths with a starting vertex and distance specified by the user. The program should return all of the paths that match the specified data.
Here is my code this far:
vector<vector<Vertex>> Graph::FindPaths(Graph &g, int startingIntersection, float distanceInMiles)
{
    /* A vector which contains vectors which will contain all of the suitable found paths. */
    vector<vector<Vertex>> paths;
    /* Create an empty set to store the visited nodes. */
    unordered_set<int> visited;
    /* Vector which will be used to hold the current path. */
    vector<Vertex> CurrentPathList;
    /* Will be used to store the currentVertex being examined. */
    Vertex currentVertex;
    /* Will be used to store the next vertex ID to be evaluated. */
    int nextVertex;
    /* Will be used to determine the location of the start ID of a vertex within the VertexList. */
    int start;
    /* Stack containing the current paths. */
    stack<Vertex> currentPaths;
    /* CurrentPathDistance will be used to determine the current distance of the path. */
    float currentPathDistance = 0;
    /* The startingIntersection location must be found within the VertexList. This is because there is
     * no guarantee that the VertexList will hold sequential data.
     *
     * For example, the user inputs a startingIntersection of 73. The Vertex for intersection #73 may
     * be located at the 20th position of the VertexList (i.e. VertexList[20]). */
    start = g.FindStartingIntersection(g, startingIntersection);
    /* Push the startingIntersection onto the stack. */
    currentPaths.push(g.VertexList[start]);
    /* Continue to iterate through the stack until it is empty. Once it is empty we have exhausted all
     * possible paths. */
    while(!currentPaths.empty())
    {
        /* Assign the top value of the stack to the currentVertex. */
        currentVertex = currentPaths.top();
        /* Pop the top element off of the stack. */
        currentPaths.pop();
        /* Check to see if we are back to the startingIntersection. As a note, if we are just starting, it will
         * put the startingIntersection into the paths. */
        if(currentVertex.id == startingIntersection)
        {
            /* Add currentVertex to a list. */
            CurrentPathList.push_back(currentVertex);
            /* Find the current path distance. */
            currentPathDistance = FindPathDistance(g, CurrentPathList);
            /* Check the currentPathDistance. If it is within +/- 1 mile of the specified distance, then place
             * it into the vector of possible paths. */
            if((currentPathDistance + 1 >= distanceInMiles) && (currentPathDistance - 1 <= distanceInMiles))
            {
                paths.push_back(CurrentPathList);
            }
        }
        else /* The ending vertex was not the user specified starting vertex. */
        {
            /* Remove all elements from the stack. */
            while(!currentPaths.empty())
            {
                currentPaths.pop();
            }
        }
        nextVertex = FindUnvisitedNeighbor(g, currentVertex, visited);
        /* Repeat while current has unvisited neighbors. */
        while(nextVertex != -1)
        {
            /* Find the new starting vertex. */
            start = g.FindStartingIntersection(g, nextVertex);
            /* Push the startingIntersection onto the stack. */
            currentPaths.push(g.VertexList[start]);
            /* Push the next vertex into the visited list. */
            visited.insert(nextVertex);
            nextVertex = FindUnvisitedNeighbor(g, currentVertex, visited);
        }
    }
    /* Return the vector of paths that meet the criteria specified by the user. */
    return paths;
}
My code for FindUnvisitedNeighbor() is as follows:
int FindUnvisitedNeighbor(Graph &g, Vertex v, unordered_set<int> visited)
{
    /* Traverse through vertex "v"'s EdgeList. */
    for(int i = 0; i + 1 <= v.EdgeList.size(); i++)
    {
        /* Create an iterator to traverse through the visited list to find a specified vertex. */
        unordered_set<int>::const_iterator got = visited.find(v.EdgeList[i].intersection_ID_second);
        /* The vertex was not found in the visited list. */
        if(got == visited.end())
        {
            return v.EdgeList[i].intersection_ID_second;
        }
    }
    return -1;
}
This seems like a fundamentally algorithmic problem rather than an implementation-specific one, so I have provided detailed, high-level pseudocode for the algorithm rather than actual code. (Also, I don't know C++.) Let me know if any of the syntax/logic is unclear and I can clarify. It essentially does a DFS, but it doesn't stop when it finds the value: it continues searching and reports all paths to the value that meet the given distance criterion.
// Input: Graph G, Vertex start, Integer targetDistance
// Output: Set< List<Vertex> > paths
FindPaths ( G, start, targetDistance ):
    create an empty set, paths
    create an empty set, visited
    create an empty stack, currentPath

    // Iterative Exhaustive DFS
    push start on currentPath
    while ( currentPath is not empty ):
        current = pop ( currentPath )
        if ( current equals start ):
            copy currentPath to a list, L (reversed order doesn't matter)
            // find its weight
            w = FindPathWeight ( L, G )
            if ( w+1 >= targetDistance AND w-1 <= targetDistance ):
                add L to paths
        else if ( current is a dead-end ):
            drop it completely, it's useless
        x = FindUnvisitedNeighbor ( current, G, visited )
        // repeat while current has unvisited neighbors
        while ( x ):
            push x on currentPath
            add x to visited
            x = FindUnvisitedNeighbor ( current, G, visited )
    return paths
//EndFindPaths

// Input: List path, Graph G
// Output: Integer weight
FindPathWeight ( path, G ):
    Integer weight = 0;
    for each ( Vertex v in path ):
        if ( v is the end of the path ): break
        Consider the next vertex in the list, u
        Find the edge v——u in the graph, call it e
        Get the weight of e, w
        weight = weight + w
    return weight
//EndFindPathWeight

// Input: Vertex v, Graph G, Set visited
// Output: Vertex u (or null, if there are none)
FindUnvisitedNeighbor ( v, G, visited ):
    for each ( Edge v——u in G.EdgeList ):
        if ( u in visited ): continue
        // u is the first unvisited neighbor
        return u
    return null
//EndFindUnvisitedNeighbor
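In case a concrete starting point helps, here is a hedged C++ sketch of the same exhaustive DFS, written recursively with explicit backtracking instead of an explicit stack. The adjacency-list layout (each undirected edge stored in both endpoints' lists as (neighbor, weight) pairs) and all of the names are assumptions for illustration, not the asker's Graph class; the +/- 1 mile tolerance mirrors the original check:
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical adjacency list: g[v] holds (neighbor, edge weight) pairs,
// with every undirected edge listed under both of its endpoints.
using AdjList = std::vector<std::vector<std::pair<int, float>>>;

void dfs(const AdjList &g, int start, int current, float targetDistance,
         float pathDistance, std::vector<bool> &onPath,
         std::vector<int> &path, std::vector<std::vector<int>> &results)
{
    for (const auto &[next, w] : g[current]) {
        if (next == start && path.size() > 2) {
            // Closed a simple cycle back to the start; keep it if the
            // total length is within +/- 1 mile of the requested distance.
            if (std::fabs(pathDistance + w - targetDistance) <= 1.0f)
                results.push_back(path);
            continue;
        }
        // Skip vertices already on the path (simple cycles only) and
        // prune branches that are already too long to ever qualify.
        if (onPath[next] || pathDistance + w > targetDistance + 1.0f)
            continue;
        onPath[next] = true;
        path.push_back(next);
        dfs(g, start, next, targetDistance, pathDistance + w, onPath, path, results);
        path.pop_back();      // backtrack
        onPath[next] = false;
    }
}

std::vector<std::vector<int>> findCycles(const AdjList &g, int start,
                                         float targetDistance)
{
    std::vector<bool> onPath(g.size(), false);
    std::vector<int> path{start};
    std::vector<std::vector<int>> results;
    onPath[start] = true;
    dfs(g, start, start, targetDistance, 0.0f, onPath, path, results);
    return results;
}
Paths of fewer than three vertices are rejected on purpose: in an undirected graph the two-edge walk start -> A -> start would reuse the same edge and is not a simple cycle.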
Depth-first is fine. You have to abort when:
* path too long (bad)
* vertex already visited (bad)
* starting vertex visited (found a solution)
For detecting these conditions you have to keep track of the edges/vertices visited so far.
The depth-first walk looks like this (completely unchecked) pseudocode; anyway, you should get the idea:
edge
    pair of node
    length
node
    list of edge
arrow // incoming edge, node
    pair of edge, node
path
    list of arrow
check_node(arrow) // incoming edge, node
    current_path.push(arrow)
    if length(current_path) > limit
        // abort: path too long
        current_path.pop(arrow)
        return
    if length(current_path) > 1 && current_path.first.node == current_path.last.node
        // found a solution
        solutions.push(current_path) // store the found solution and continue the search
        current_path.pop(arrow)
        return
    if arrow.node in current_path // i.e. it appears before the arrow just pushed
        // abort: cycle through a non-start vertex
        current_path.pop(arrow)
        return
    for each edge of arrow.node
        // go deeper
        check_node(arrow(edge, edge.other_node))
    current_path.pop(arrow)
    return
main
    path current_path
    list of path solutions // will hold all possible solutions after the check
    make_graph
    check_node(arrow(NULL, graph.start))
    for each solution in solutions // print 0 .. n solutions
        print solution

Resources