I was wondering whether there is an algorithm for getting the coordinates of circles like on an Apple Watch.
I would like a function where I enter the number of circles I want; it then loops around the origin in a hexagonal fashion, adding the circles. For example, if I input 6, the following image is produced.
One way to do this would be to formulate this as a graph search problem. Imagine making a graph of hexagons, with the center hexagon at (0, 0). To find the positions of the n hexagons you want, run a BFS-like algorithm as follows:
Seed a queue with the point (0, 0), adding it to the list of points.
While fewer than n points have been generated:
Dequeue a point from the queue.
For each of its six hexagonal neighbors, if that point hasn’t yet been generated, generate it and add it to the queue.
The math of going from a hexagon to its neighbor is mostly trig; just move d units at 60n° for n = 0, 1, …, 5.
To ensure that things radiate out from the center, when visiting the neighbors of a point, consider sorting them clockwise or counterclockwise around the perimeter of the hexagon.
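For illustration, here is a minimal Python sketch of that BFS layout, where d is the centre-to-centre distance; the names are my own, so treat this as a sketch rather than a reference implementation:
import math
from collections import deque

def hex_layout(n, d=1.0):
    # BFS outward from the centre; each point spawns its six hexagonal
    # neighbours at angles 60*k degrees, k = 0..5
    points = [(0.0, 0.0)]
    queue = deque(points)
    seen = {points[0]}
    while queue and len(points) < n:
        x, y = queue.popleft()
        for k in range(6):
            a = math.radians(60 * k)
            p = (round(x + d * math.cos(a), 9), round(y + d * math.sin(a), 9))
            if p not in seen:
                seen.add(p)
                queue.append(p)
                points.append(p)
                if len(points) == n:
                    break
    return points[:n]

print(hex_layout(7))  # the centre plus its six neighbours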
This question extends some computational details of this question.
Suppose one has a set of (potentially overlapping) circles, and one wishes to compute the area this set of circles covers. (For simplicity, one can assume some precomputation steps have been made, such as getting rid of circles included entirely in other circles, as well as that the circles induce one connected component.)
One way to do this is mentioned in Ants Aasma's and Timothy Shields' answers: the area of overlapping circles is just a collection of circle slices and polygons, both of which have areas that are easy to compute.
The trouble I'm encountering however is the computation of these polygons. The nodes of the polygons (consisting of circle centers and "outer" intersection points) are easy enough to compute:
And at first I thought a simple algorithm of picking a random node and visiting neighbors in clockwise order would be sufficient, but this can result in the following "outer" polygon being constructed, which is not one of the correct polygons.
So I thought of different approaches. A breadth-first search to compute minimal cycles, but I think the previous counterexample can easily be modified so that this approach results in the "inner" polygon containing the hole (and which is thus not a correct polygon).
I was thinking of maybe running a Las Vegas-style algorithm: take random points and, if a point lies in an intersection of circles, try to compute the corresponding polygon. If such a polygon exists, remove the circle centers and intersection points composing said polygon. Repeat until no circle centers or intersection points remain.
This would avoid ending up computing the "outer" polygon or the "inner" polygon, but it would introduce new problems (besides the potentially high running time), e.g. when more than 2 circles intersect at a single intersection point, computing one polygon could remove that intersection point even though it is still needed for the next.
Ultimately, my question is: How to compute such polygons?
PS: As a bonus question for after having computed the polygons: how do I know which angle to consider when computing the area of some circle slice, theta or 2*pi - theta?
Once we have the points of the polygons in the right order, computing the area is not too difficult.
The way to achieve that is by exploiting planar duality. See the Wikipedia article on the doubly connected edge list representation for diagrams, but the gist is, given an oriented edge whose right face is inside a polygon, the next oriented edge in that polygon is the reverse direction of the previous oriented edge with the same head in clockwise order.
Hence we've reduced the problem to finding the oriented edges of the polygonal union and determining the correct order with respect to each head. We actually solve the latter problem first. Each intersection of disks gives rise to a quadrilateral. Let's call the centers C and D and the intersections A and B. Assume without loss of generality that the disk centered at C is not smaller than the disk centered at D. The interior angle formed by A→C←B is less than 180 degrees, so the signed area of that triangle is negative if and only if A→C precedes B→C in clockwise order around C, which in turn holds if and only if B→D precedes A→D in clockwise order around D.
Now we determine which edges are actually polygon boundaries. For a particular disk, we have a bunch of angle intervals around its center from before (each sweeping out the clockwise sector from the first endpoint to the second). What we need amounts to a more complicated version of the common interview question of computing the union of segments. The usual sweep line algorithm that increases the cover count whenever it scans an opening endpoint and decreases the cover count whenever it scans a closing endpoint can be made to work here, with the adjustment that we need to initialize the count not to 0 but to the proper cover count of the starting angle.
There's a way to do all of this with no trigonometry, just subtraction and determinants and comparisons.
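For what it's worth, that trig-free primitive is essentially a 2x2 determinant (twice the signed area of a triangle); a tiny sketch with names of my own choosing, where the sign convention depends on whether your y-axis points up or down:
def signed_area2(a, c, b):
    # Twice the signed area of triangle (a, c, b); its sign is the
    # comparison driving the clockwise ordering around C described above
    return (c[0] - a[0]) * (b[1] - a[1]) - (c[1] - a[1]) * (b[0] - a[0])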
I have a set of random points in a plane and I want to put another point at the most "sparse" position.
For example, if there are some points in 0 < x < 10 and 0 < y < 10:
# this python snippet just generates the plot below.
import matplotlib.pyplot as plt
# there are actually a lot more, ~10000 points.
xs = [8.36, 1.14, 0.93, 8.55, 7.49, 6.55, 5.13, 8.49, 0.15, 3.48]
ys = [0.65, 6.32, 2.04, 0.51, 4.5, 7.05, 1.07, 5.23, 0.66, 2.54]
plt.xlim([0, 10])
plt.ylim([0, 10])
plt.plot(xs, ys, 'o')
plt.show()
Where should I put a new point in this plane so that the new point becomes the most distant from the others? Please note that I want to maximise the minimum distance to another point, but not to maximise the average distance to all other points (Thanks to user985366's comment).
"How can I find the farthest point from a set of existing points?" is the one at least I could find, but I'm not sure if the page directly solves my situation (actually the linked case looks more complicated than my case).
[edit] By the way, I noticed that general constrained global optimization can find a possible solution (if I add a point at each corner) [4.01, 5.48] in this case, but I think it doesn't work if there are a lot more, say ~10000 points.
Your problem can be solved by computing the Voronoi diagram of the set of points. This is a division of the plane into regions such that there is one region per point in the original set, and within that region, the corresponding point is closer than any other point from the set.
The boundaries of these regions are straight lines such that any point on that line is equidistant from the two points corresponding to the regions which meet at that boundary. The vertices where multiple boundaries meet are therefore equidistant to at least three points from the original set.
The sparsest point in the plane is either a vertex in the Voronoi diagram, or the intersection of an edge in the Voronoi diagram with the boundary of the plane, or one of the corners of the plane. The Voronoi diagram can be computed by standard algorithms in O(n log n) time; after this, the sparsest point can then be found in linear time, since you know which Voronoi regions each vertex/edge is adjacent to, and hence which point from the original set to measure the distance to.
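As a rough sketch, assuming scipy is available, you can take the Voronoi vertices inside the square plus its corners as candidates (intersections of Voronoi edges with the boundary are omitted here for brevity):
import numpy as np
from scipy.spatial import Voronoi, cKDTree

points = np.column_stack([xs, ys])   # xs, ys from the snippet above
vor = Voronoi(points)
tree = cKDTree(points)

# candidate locations: Voronoi vertices inside the square, plus its corners
candidates = [v for v in vor.vertices if 0 <= v[0] <= 10 and 0 <= v[1] <= 10]
candidates += [(0, 0), (0, 10), (10, 0), (10, 10)]

dists, _ = tree.query(candidates)    # distance from each candidate to its nearest point
best = candidates[int(np.argmax(dists))]
print(best, dists.max())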
Does anyone know a relatively fast algorithm for decomposing a set of polygons into their distinct overlapping and non-overlapping regions, i.e. given a set of n polygons, find all the distinct regions among them?
For instance, the input would be 4 polygons representing circles as shown below
and the output will be all polygons representing the distinct regions shown in the different colours.
I can write my own implementation using polygon operations but the algorithm will probably be slow and time consuming. I'm wondering if there's any optimised algorithm out there for this sort of problem.
Your problem is called the map overlay problem. It can be solved in O(n*log(n)+k*log(k)) time, where n is the number of segments and k is the number of segment intersections.
First you need to represent your polygons as a doubly connected edge list, different faces corresponding to the interiors of different polygons.
Then use the Bentley–Ottmann algorithm to find all segment intersections and rebuild the edge list. See: Computing the Overlay of Two Subdivisions or Subdivision representation and map overlay.
Finally, walk around each cycle in the edge list and collect faces of that cycle's half-edges. Every set of the faces will represent a distinct overlapping region.
See also: Shapefile Overlay Using a Doubly-Connected Edge List.
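Not the DCEL/Bentley–Ottmann pipeline described above, but if a library is acceptable, a rough shapely sketch gets the same distinct regions by unioning all polygon boundaries and re-polygonizing the noded linework (the circle positions below are made up):
from shapely.geometry import Point
from shapely.ops import unary_union, polygonize

# four overlapping "circles" approximated as polygons
polys = [Point(0.0, 0.0).buffer(1.0), Point(1.2, 0.0).buffer(1.0),
         Point(0.6, 1.0).buffer(1.0), Point(0.6, -1.0).buffer(1.0)]

boundaries = unary_union([p.boundary for p in polys])  # noded at all intersections
regions = list(polygonize(boundaries))                 # one polygon per distinct region
print(len(regions))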
I don't think it is SO difficult.
I have answered a similar question on a friendly site, and it was checked by a smaller community:
https://cs.stackexchange.com/questions/20039/detect-closed-shapes-formed-by-points/20247#20247
Let's consider a more general question: let's take curves instead of polygons. And let's allow them to go out of the picture border, but we'll count only simple polygons that lie wholly within the picture.
Find all intersections by checking all pairs of segments belonging to different curves. Of course, filter the pairs before the real intersection check.
Number all curves 1..n. Fix some order of the segments within each curve.
For every curve create a sequence of intersections, SOI: if the curve starts at the border, SOI[1] is null. If not, SOI[1] = (the number of the first curve it intersects, the sign of the left movement on the intersecting curve). Go on, writing into SOI every intersection - the number of a curve if there is one, or 0 if it is an intersection with the border.
Obviously, you are looking only for simple bordered areas that have no curves inside them.
We'll call the pieces of curves between two adjacent non-null intersection points segments.
Having SOI for each curve:
For each segment of curve 1, starting from the first point of the segment, make 2 attempts to draw a polygon of segments. It is 2 because you can go two ways along the first intersecting curve.
For the right attempt, make only left turns; for the left attempt, make only right turns.
If you arrive at a point with no segment in the correct direction, the attempt fails. If you return to curve 1, it succeeds: you have a closed area.
Remember all successful attempts.
Repeat this for all segments of curve 1.
Repeat this for all other curves, checking every newly found area against the already found ones. Two identical adjacent segments are enough to consider two areas equal.
How to find the orientation of the intersection.
When segment p(p1,p2) crosses segment q(q1,q2), we can compute the cross product p x q. We are interested only in the sign of its Z coordinate - the one that points out of our plane. If it is +, q crosses p from left to right. If it is -, q crosses p from right to left.
The Z coordinate of the cross product is computed here as the determinant of the matrix:
0 0 1
p2x-p1x p2y-p1y 0
q2x-q1x q2y-q1y 0
(of course, it could be written more simply, but it is a good memorization trick)
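In code, that sign test is a few lines (the function name is mine; the sign interpretation is the one described above):
def crossing_sign(p1, p2, q1, q2):
    # z component of the cross product of the two segment direction vectors;
    # its sign tells from which side q crosses p, as described above
    z = (p2[0] - p1[0]) * (q2[1] - q1[1]) - (p2[1] - p1[1]) * (q2[0] - q1[0])
    return 1 if z > 0 else (-1 if z < 0 else 0)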
Of course, if you swap all rights for lefts, nothing really changes in the algorithm as a whole.
Suppose I have a set of points in the Cartesian plane, defined by an array/vector of (X,Y) coordinates. This set of points will be "contiguous" in the coordinate plane, if any set of discontinuous points can be contiguous. That is, these points originated as a rectangular grid in which regions of points were eliminated by a prior algorithm. The shape outlined by the points is arbitrary, but it will tend to have arcs for edges.
Suppose further that I can create circles of fixed radius r.
I would like an algorithm that will find me the center X,Y for a circle that will enclose as close to exactly half of the given points as possible.
OK, try this (sorry if my wording is bad: I didn't learn my maths in English).
Step 1: Find axis
For all pairs of points that are less than 2r apart, calculate how many points lie on either side of the connecting line.
Choose the pair with the worst balance.
Calculate the line that bisects these two points, as an axis ("axis of biggest concavity").
Step 2: Find center
Start on the axis, far (>2r) away, on the side that had the lower point count in step 1 (the concave side).
Move the center along the axis until you reach the desired point count. This can be done by moving up with a step of sqrt(delta), where delta is the smallest distance between 2 points in the set; if you overshoot, move back, halving the step, etc.
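A small Python sketch of the step-1 balance count; the helper name and signature are my own:
def side_counts(a, b, points):
    # Count how many points lie on each side of the line through a and b;
    # the "worst balance" pair maximises abs(left - right) among pairs
    # that are less than 2r apart.
    left = right = 0
    for px, py in points:
        cross = (b[0] - a[0]) * (py - a[1]) - (b[1] - a[1]) * (px - a[0])
        if cross > 0:
            left += 1
        elif cross < 0:
            right += 1
    return left, right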
You might want to look into the algorithm for smallest enclosing circle of a point set.
A somewhat greedy algorithm would be to simply remove points one at a time until the circle's radius is less than or equal to r.
Normal marching cubes finds 12 edges per cube, but you can do 3 edges per cube, save the edges in an array, and then go through the cubes again, referencing the edges from the adjacent cubes rather than recalculating them.
The process of referencing adjacent cubes isn't clearly discussed on the Internet, so anyone using marching cubes would be welcome to help find the details of the solution. Do you know an implementation already?
Here is a picture showing the 3 edges in yellow that you need for each cube, instead of 12.
EDIT- I just found this solution, although it's just a part of it:
Imagine 3 edges coming from the corner of the cube with the lowest coordinates. Then all other edges just belong to other cubes. If our cube has coordinates (x,y,z), the neighboring cubes have coordinates (x+1,y,z), (x,y+1,z), (x,y,z+1), (x+1,y+1,z), (x+1,y,z+1), (x,y+1,z+1). You can imagine each edge as a vector. Then the corner of the cube has edges (1,0,0), (0,1,0), (0,0,1). The cube with coordinates (x+1,y,z) has edges (0,1,0) and (0,0,1) that belong to our cube. The cube (x+1,y+1,z) has only one edge (0,0,1) that belongs to our cube. So if you store 4 elements for the cube you can access them like this:
edge1 = cube[x][y][z][0];
edge2 = cube[x][y][z][1];
edge3 = cube[x][y][z][2];
edge4 = cube[x+1][y][z][1];
edge5 = cube[x+1][y][z][2];
edge6 = cube[x][y+1][z][0];
edge7 = cube[x][y+1][z][2];
edge8 = cube[x][y][z+1][0];
edge9 = cube[x][y][z+1][1];
edge10 = cube[x+1][y+1][z][2];
edge11 = cube[x+1][y][z+1][1];
edge12 = cube[x][y+1][z+1][0];
Now which points does edge7 connect? The answer is (x,y+1,z) and (x,y+1,z)+(0,0,1) = (x,y+1,z+1).
Now which cubes does edge7 connect? That is harder. We see that the z coordinate changes along the edge; this means the neighboring cubes all have the same z coordinate. All the other coordinates vary. Where we have +1, a sharing cube can have the larger coordinate; where we have +0, a sharing cube can have the smaller coordinate. So the edge connects cubes (x,y,z) and (x-1,y+1,z); the other 2 cubes that share the same edge are (x,y+1,z) and (x-1,y,z).
EDIT2-
So I am doing this, and it isn't so simple. I have a loop which simultaneously calculates the 8 points, the 12 edges, the interpolation along the edges, the bit values, and the vertex values for the edges, all in one loop.
So I am adding a new loop before it to calculate as much as possible and place it in arrays to be used in the complicated loop.
I can recycle the interpolated values of the intersection points along edges in an array, although I will have to recalculate all the points again in the complicated loop, because the values of the points are used to decide the bit numbers that reference values in the vertex table. That confuses me! I thought that once I had the edge intersection values, I could use those directly to look up the triangle tables, without having to calculate the points all over again!
In fact, no.
Anyway, here is another bit of information from someone who has already done it, if only it were readable!
http://www.new-npac.org/projects/sv2all/sv2/vtk/patented/vtkImageMarchingCubes.cxx
Scroll to this line: "Cubes are responsible for edges on their min faces."
A simple way to reduce edge calculations in the way you are suggesting is to compute cubes one axis aligned plane at a time.
If you kept all of the cubes, with their edges, in memory, it would be easy to compute each edge only once and to find adjacent edges by indexing. However, you usually don't want to keep all the cubes in memory at once because of the space requirements.
A solution to this is to compute one plane of cubes at a time. i.e. an axis aligned cross-section, starting from one side and progressing to the opposite side. You then only need to keep at most two full planes of cubes in memory at a time. As you move through each plane you can reference shared edges in the previous plane and previously computed cubes in the current plane. As you move to the next plane you can deallocate the plane you will no longer need.
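As a bare skeleton of that slab-by-slab buffering (the helpers here are placeholders, not a real implementation):
def compute_slab(field, z):
    # placeholder: build the edge/vertex data for the slab of cubes at depth z
    return {}

def emit_triangles(cur, prev, z):
    # placeholder: emit triangles for slab z, reusing shared edges from the
    # previous slab and from already-visited cubes in the current one
    pass

def march_by_slabs(field, num_slabs):
    prev = None
    for z in range(num_slabs):
        cur = compute_slab(field, z)
        emit_triangles(cur, prev, z)   # prev is None for the first slab
        prev = cur   # the slab behind us is no longer needed and can be freed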
Edit: This article discusses doing just what I suggest:
http://alphanew.net/index.php?section=articles&site=marchoptim&lang=eng
Funny, because when I implemented my own MCs I came up with a similar solution.
When you start working with MCs you treat them as distinct cubes, but if you want to go for high performance you'll need to create the entire mesh as a whole, and creating vertex indices etc. is not so easy here. It gets even more interesting when you want to add smooth per-vertex normals :).
To solve this I created a simple index cache mechanism to store vertex indices for each edge.
Then, for each computed edge I have cube position x,y,z and edge index and I do as follows:
For each axis:
if the edge is on '+' side of axis:
replace edge index with its '-' side sibling
increment cube position along axis
This simple operation gives me the correct cube position and an edge index of 0, 1, or 2. Then I compute a total cache index from the x, y, z, edgeIndex values with simple bit rotations.
When I have the cache index I check whether the cached value is bigger than -1. If it is, there is an already computed vertex at this edge and I can reuse it. If it's -1, I need to create a new vertex and store its index in the cache. This way you'll compute each vertex only once, and you can even add a normal value shared between every triangle containing your vertex.
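A rough Python sketch of that cache; the remap table below uses the same (0-based) edge numbering as the edge1..edge12 listing in the earlier answer, but it has to match whatever numbering your own marching-cubes tables use, so treat it as illustrative:
# edge -> (dx, dy, dz, owned edge index 0..2): each of the 12 edges is remapped
# to the neighbouring cube that owns it as one of its 3 "minimal" edges
EDGE_REMAP = {
    0: (0, 0, 0, 0), 1: (0, 0, 0, 1), 2: (0, 0, 0, 2),
    3: (1, 0, 0, 1), 4: (1, 0, 0, 2),
    5: (0, 1, 0, 0), 6: (0, 1, 0, 2),
    7: (0, 0, 1, 0), 8: (0, 0, 1, 1),
    9: (1, 1, 0, 2), 10: (1, 0, 1, 1), 11: (0, 1, 1, 0),
}

vertex_cache = {}   # (x, y, z, edge 0..2) -> vertex index

def vertex_index(x, y, z, edge, create_vertex):
    dx, dy, dz, e = EDGE_REMAP[edge]
    key = (x + dx, y + dy, z + dz, e)
    if key not in vertex_cache:
        vertex_cache[key] = create_vertex(key)   # each vertex is computed only once
    return vertex_cache[key]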
Yes, I think I do it similarly to kolenda. I have a struct with 5 ints: a (cube) index and 4 vertex indices (A, B, C, D).
For the innermost loop (x), I have just lastXCache and nextXCache. On the 4 edges pointing in the -x direction, I ask whether lastXCache.A != -1 and, if so, assign the previously calculated value, etc.
In the +x direction I store the calculated vertices in nextXCache; when the cube is done: lastXCache = nextXCache;
For the y and z directions it needs to be a List (the Unity term for a mutable array); the next y is the next row (so sizex) and the next z is the next plane (so sizex * sizey).
The only disadvantage is that this way it has to run cube after cube, so serially. But you can calculate different chunks in parallel.
Another way I thought of that could be more parallel would need 2 passes: 1. calculate the 3 edges of every cube; when pass 1 is done -> 2. draw the triangles.
I don't really know which is better, but the way it actually works seems to be fast enough; even better with Unity Jobs. Create one IJob per chunk/mesh.