Given a mesh face, find its neighboring faces - three.js

I'm trying to efficiently find all neighboring faces of a given face.
I've taken a somewhat sly approach, but I wonder if it can be improved upon.
The approach I've taken so far is to create a data structure after my mesh geometry is created. I build an array of arrays that maps each vertex index to the faces that contain it:
var vertexToFace = [];
function crossReference(g) {
    for (var fx = 0; fx < g.faces.length; fx++) {
        vertexToFace[fx] = new Array();
    }
    for (var fx = 0; fx < g.faces.length; fx++) {
        var f = g.faces[fx];
        var ax = f.a;
        var bx = f.b;
        var cx = f.c;
        vertexToFace[ax].push(fx);
        vertexToFace[bx].push(fx);
        vertexToFace[cx].push(fx);
    }
}
Now that I have the hash of arrays, I can retrieve a given face's neighbors:
var neighbors = [];
neighbors.push( vertexToFace[face.a], vertexToFace[face.b], vertexToFace[face.c] );
This works fine, but I'm wondering if it's overkill. I know that each face in geometry.faces contains members a, b, c that are indices into geometry.vertices.
I don't believe the reverse information is stored, although, tantalizingly, each vertex in geometry.vertices does have the member .index, but it doesn't seem to correspond to faces.
Am I missing something obvious?
Thanks.

I think that
for (var fx = 0; fx < g.faces.length; fx++) {
    vertexToFace[fx] = new Array();
}
should be changed to
for (var fx = 0; fx < g.vertices.length; fx++) {
    vertexToFace[fx] = new Array();
}
since otherwise there won't be enough elements in vertexToFace whenever the number of vertices is greater than the number of faces. You can also simplify a bit with
vertexToFace[fx] = [];
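Putting the fix together, a corrected, self-contained sketch might look like the following. (The getNeighbors helper and the plain-object face/geometry shapes here are my own additions for illustration, not three.js API.)

```javascript
// Map each vertex index to the indices of the faces that use it.
// Note the table is sized by vertices, not faces, per the fix above.
function crossReference(g) {
    var vertexToFace = [];
    for (var vx = 0; vx < g.vertices.length; vx++) {
        vertexToFace[vx] = [];
    }
    for (var fx = 0; fx < g.faces.length; fx++) {
        var f = g.faces[fx];
        vertexToFace[f.a].push(fx);
        vertexToFace[f.b].push(fx);
        vertexToFace[f.c].push(fx);
    }
    return vertexToFace;
}

// Collect the distinct neighboring face indices of one face,
// excluding the face itself.
function getNeighbors(vertexToFace, face, selfIndex) {
    var seen = {};
    var neighbors = [];
    [face.a, face.b, face.c].forEach(function (vx) {
        vertexToFace[vx].forEach(function (fx) {
            if (fx !== selfIndex && !seen[fx]) {
                seen[fx] = true;
                neighbors.push(fx);
            }
        });
    });
    return neighbors;
}
```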

To me it depends on your use case. If you only need to do this once, then it is indeed overkill, and I would just iterate through the faces to find the neighbours. The cost is O(#G), #G being the number of faces, so not bad; but every subsequent query costs O(#G) again.
With your approach, the cost is O(#G) to create the structure, after which each retrieval is just an array index, so essentially O(1) plus the size of the result. The first retrieval therefore costs O(#G), but every retrieval after that is cheap, which makes your structure worthwhile whenever you have many retrieval operations. However, you will need to keep track of when the geometry changes, since that invalidates the structure. So, as so often: it depends...
Just a side note: the question is also what you consider a neighbor, one vertex in common or a whole edge. I mention it because I just had a case where it was the latter.
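If you need edge-neighbors specifically, one way (a sketch; the string key format is my own choice) is to hash each edge by its sorted vertex pair, so two faces count as neighbors only when they share both endpoints of an edge:

```javascript
// Map each undirected edge "lo_hi" to the indices of the faces
// that contain it; two faces in the same bucket share an edge.
function edgeToFaces(faces) {
    var map = {};
    faces.forEach(function (f, fx) {
        [[f.a, f.b], [f.b, f.c], [f.c, f.a]].forEach(function (e) {
            var key = Math.min(e[0], e[1]) + "_" + Math.max(e[0], e[1]);
            (map[key] = map[key] || []).push(fx);
        });
    });
    return map;
}
```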

Related

Can I efficiently construct a Voronoi diagram / Delaunay mesh from a subset of points?

I have a problem where I have a large number (~10,000) points (in 2-D) from which I need to repeatedly pick a small number (~100) and construct a Voronoi diagram.
I can pre-compute the Voronoi diagram / Delaunay mesh for the 10000 points which always remain the same. Is there then a way to efficiently compute the Voronoi diagram for a small subset of these points? Or do I need to start from scratch every time?
Many thanks!
Generally speaking, you see the term "dynamic algorithms" used to describe the process of taking an algorithm where the input is typically always known up front and modify it to handle the case where the underlying data change. In your case, you're looking for "dynamic Voronoi diagrams," which are data structures that maintain Voronoi diagrams even as nodes are added and deleted.
I am not particularly familiar with dynamic computational geometry algorithms, but a bit of Googling turned up a couple of hits for "dynamic Voronoi diagrams," including this paper by Gowda, Kirkpatrick, et al describing one approach. It may not end up being faster in your case than just computing the full Voronoi diagram, but it could be useful as a starting point for a search.
For such a small input set (100 vertices), just building a Delaunay mesh/Voronoi diagram from scratch should be reasonably fast. While I don't pretend to have an exhaustive knowledge of the algorithms that exist today, my experience with the Delaunay is that the logic for removing vertices from a mesh tends to be more expensive than adding to it. So the approach of creating a mesh of 100 points from a mesh of 10000 by removing vertices probably would not succeed.
Here's a sample of Java code using the Tinfour library. Building 1000 Voronoi diagrams required only 277 milliseconds on a middle-of-the-road laptop computer. The focus of the Tinfour library is the Delaunay rather than the Voronoi, but it does include a so-so class for building a Voronoi from a Delaunay mesh. I think that any good computational geometry library you find should yield similar performance.
public static void main(String[] args) {
    int seed = 0;
    List<Vertex> masterList = TestVertices.makeRandomVertices(10000, seed);
    int nTrials = 1000;
    int nVerticesInSubset = 100;
    int nPolygons = 0;
    Random random = new Random(seed);
    long time0 = System.nanoTime();
    for (int iTrial = 0; iTrial < nTrials; iTrial++) {
        // We wish to build a subset of N unique vertices.
        // The bitSet allows us to avoid randomly selecting
        // a vertex more than once.
        BitSet bitSet = new BitSet(10000);
        ArrayList<Vertex> subList = new ArrayList<>();
        for (int i = 0; i < nVerticesInSubset; i++) {
            while (true) {
                int index = random.nextInt(masterList.size());
                // The random index is a value in the range 0 to 9999.
                // See if the corresponding vertex has already been selected;
                // if not, add it to the subList. If so, keep looking.
                if (!bitSet.get(index)) {
                    subList.add(masterList.get(index));
                    bitSet.set(index);
                    break;
                }
            }
        }
        IncrementalTin tin = new IncrementalTin(0.001);
        tin.add(subList, null);
        BoundedVoronoiDiagram voronoi = new BoundedVoronoiDiagram(tin);
        nPolygons += voronoi.getPolygons().size();
    }
    long time1 = System.nanoTime();
    System.out.println("Elapsed time in milliseconds " + (time1 - time0) / 1.0e+6);
    System.out.println("Avg. polygons in Voronoi " + ((double) nPolygons / nTrials));
}

Three js scrolling mesh colors and Z positions on bufferedGeometry

I am working with a three js indexed buffered geometry that is in need of some performance improvements.
The visualization I'm working on is essentially a 3D waterfall visualization where each time I receive a new "row" of data I will copy each vertex color and Z position to the row above it, like so:
for (let i = this.geometry.attributes.position.count - 1; i >= this.frameWidth; i--) {
    let former = i - this.frameWidth;
    this.geometry.attributes.position.setZ(i, this.geometry.attributes.position.getZ(former));
    this.geometry.attributes.color.setXYZ(i, this.geometry.attributes.color.getX(former), this.geometry.attributes.color.getY(former), this.geometry.attributes.color.getZ(former));
}
Followed by entering the new Row of data where the frame variable corresponds to the new row to be entered:
for (let i = 0; i < this.frameWidth; i += 1) {
    const color = this.colorMap.current.getColor(frame[i]);
    this.geometry.attributes.color.setXYZ(i, color.r, color.g, color.b);
    this.geometry.attributes.position.setZ(i, frame[i]);
}
this.geometry.attributes.color.needsUpdate = true;
this.geometry.attributes.position.needsUpdate = true;
As someone who is more familiar with GPU programming, it really seems to me that these for loops could be eliminated or done in parallel in some way to vastly improve performance. Is there something in three.js that does these sorts of operations that I'm not aware of?
A second idea I have is to translate the position of the mesh along the Y axis (eliminating the need to copy color and position entries) and re-purpose the vertices of the last row as the "new" front row where the new data is placed. However, it seems that this requires some modification of the indices that I'm not exactly sure how to do.
Any ideas and suggestions are extremely helpful :) Thanks!
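One way to cut down the copy cost (an untested sketch): the color attribute's backing array is a flat Float32Array, so the whole row shift can be done with one bulk copyWithin call, which is a standard TypedArray method, instead of per-element setXYZ. Positions would still need the per-vertex Z copy, since X and Y differ from row to row. Here frameWidth is assumed to be the number of vertices per row, and the function names are my own:

```javascript
// Shift every color row up by one row: vertex i takes the color of
// vertex (i - frameWidth). copyWithin(target, start, end) copies the
// range [start, end) to target within the same array, handling
// overlapping ranges correctly (like memmove).
function shiftColorsUp(colorArray, frameWidth, vertexCount) {
    var itemSize = 3; // r, g, b per vertex
    colorArray.copyWithin(
        frameWidth * itemSize,                // destination: row 1 onward
        0,                                    // source start: row 0
        (vertexCount - frameWidth) * itemSize // source end (exclusive)
    );
}
```

After the shift you would still set `geometry.attributes.color.needsUpdate = true` as in the original code.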

How to group shapes into sets of sets of overlapping shapes

Please note I am interested in the most efficient way to solve the problem, and not looking for a recommendation to use a particular library.
I have a large number (~200,000) of 2D shapes (rectangles, polygons, circles, etc) that I want to sort into overlapping groups. For example, in the picture, the green and orange would be marked as group 1 and the black, red, and blue would be marked as group 2.
Example
Let's say I will be given a list<Shape>. A slow solution would be:
(I haven't run this code - just an example algorithm)
// Everything initialized with groupid = 0
int groupid = 1;
for (int i = 0; i < shapes.size(); ++i)
{
    if (!shapes[i].groupid)
    {
        // The shape hasn't been assigned a group yet,
        // so assign it now
        shapes[i].groupid = groupid;
        ++groupid;
    }
    // As we compare shapes, we may find that a shape overlaps with something
    // that was already assigned a group. This keeps track of groups that
    // should be merged.
    set<int> mergingGroups = set<int>();
    // Compare to all other shapes
    for (int j = i + 1; j < shapes.size(); ++j)
    {
        // If this is one of the groups that we are merging, then
        // we don't need to check overlap, just merge them
        if (shapes[j].groupid && mergingGroups.contains(shapes[j].groupid))
        {
            shapes[j].groupid = shapes[i].groupid;
        }
        // Otherwise, if they overlap, then mark them as the same group
        else if (shapes[i].overlaps(shapes[j]))
        {
            if (shapes[j].groupid)
            {
                // Already have a group assigned
                mergingGroups.insert(shapes[j].groupid);
            }
            // Mark them as part of the same group
            shapes[j].groupid = shapes[i].groupid;
        }
    }
}
A faster solution would be to put the objects into a spatial tree to reduce the number of overlap comparisons (the inner loop), but I would still have to iterate over the rest to merge groups.
Is there anything faster?
Thanks!
Hopefully this helps someone - this is what I actually implemented (in pseudocode).
tree = new spatial tree
for each shape in shapes
    set shape not in group
    add shape to tree
for each shape in shapes
    if shape in any group
        skip shape
    cur_group = new group
    set shape in cur_group
    regions = new stack
    insert bounds(shape) into regions
    while regions has items
        cur_bounds = pop(regions)
        for test_shape in find(tree, cur_bounds)
            if test_shape has group
                skip test_shape
            if overlaps(any in cur_group, test_shape)
                insert bounds(test_shape) into regions
                set test_shape group = cur_group
If you efficiently find all pairwise intersections with a spatial tree, you can use the union-find algorithm to group the objects.
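A minimal union-find (disjoint set) sketch in JavaScript, assuming you already have the list of overlapping index pairs from the spatial tree (the function names here are my own):

```javascript
function makeUnionFind(n) {
    var parent = new Array(n);
    for (var i = 0; i < n; i++) parent[i] = i;
    function find(x) {
        // Path halving: point nodes closer to the root as we walk up.
        while (parent[x] !== x) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }
    function union(a, b) {
        parent[find(a)] = find(b);
    }
    return { find: find, union: union };
}

// Group shape indices 0..n-1 given the overlapping pairs.
function groupShapes(n, overlappingPairs) {
    var uf = makeUnionFind(n);
    overlappingPairs.forEach(function (p) { uf.union(p[0], p[1]); });
    var groups = {};
    for (var i = 0; i < n; i++) {
        var root = uf.find(i);
        (groups[root] = groups[root] || []).push(i);
    }
    return Object.keys(groups).map(function (k) { return groups[k]; });
}
```

Each union/find is effectively constant time in practice, so grouping costs little beyond the intersection search itself.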

Drawing shapes based on points, making sure lines don't cross

Say I have 100 points and want to draw a closed curve (I'm using C# and graphics), like this:
Graphics g = this.CreateGraphics();
Pen pen = new Pen(Color.Black, 2);
Point[] points = new Point[DrawingPoints];
for (int x = 0; x < DrawingPoints; x++)
{
    int px = r.Next(0, MaxXSize);
    int py = r.Next(0, MaxYSize);
    Point p = new Point(px, py);
    points[x] = p;
}
g.DrawClosedCurve(pen, points);
It connects the points in the order they were inserted into points[], so the lines cross and you don't get a solid figure.
Is there an algorithm that will help me reorder the points to get a solid figure? Here's a picture below (tried as hard as I could, hehe) to help visualize what I'm asking for.
Well, in O(n log n) time, you could compute the centroid of the points and sort them in order of angle about that centroid, leaving a star-shaped polygon. That's efficient but probably messes up the order of your points too much.
I think you'd be happier with the results of a 2-OPT method for TSP (description here). 2-OPT is worst-case exponential but polynomial-time in practice.
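The centroid-sort idea from the first paragraph translates to very little code. Here is a sketch in JavaScript (points are plain {x, y} objects rather than the C# Point struct):

```javascript
// Order points by angle about their centroid, producing the vertex
// order of a simple (non-self-intersecting) star-shaped polygon.
function sortByAngleAboutCentroid(points) {
    var cx = 0, cy = 0;
    points.forEach(function (p) { cx += p.x; cy += p.y; });
    cx /= points.length;
    cy /= points.length;
    // atan2 gives the angle of each point relative to the centroid.
    return points.slice().sort(function (a, b) {
        return Math.atan2(a.y - cy, a.x - cx) - Math.atan2(b.y - cy, b.x - cx);
    });
}
```

Drawing the sorted array with DrawClosedCurve then yields a figure whose boundary does not cross itself, at the cost of scrambling the original point order.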

Datastructure and algorithm to detect collisions of irregular shaped moving objects

I came across this interview question
Many irregularly shaped objects are moving in random directions. Provide a data structure and algorithm to detect collisions. Remember that the number of objects is in the millions.
I am assuming that every object would have an x and y coordinate. Other assumptions are most welcome. Also a certain kind of tree should be used, I suppose, but I am clueless about the algorithm.
Any suggestions?
I would have a look at the plane sweep technique, specifically the Bentley-Ottmann algorithm. It uses a plane sweep to determine, in O(n log n) time (and O(n) space), the intersections of line segments in the Euclidean plane.
Most likely what you want is to subdivide the plane with a space-filling curve like a Z-curve or a Hilbert curve, thus reducing the 2D problem to a 1D one. Look for quadtree.
Link: http://dmytry.com/texts/collision_detection_using_z_order_curve_aka_Morton_order.html
There are many solutions to this problem. First: use bounding boxes or circles (balls in 3D). If the bounding boxes do not intersect, then no further tests are needed. Second: subdivide your space. You do not have to test every object against all the others (that is O(n^2)); with quadtrees you can get an average complexity of O(n).
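The bounding-box prefilter and the space-subdivision idea can be sketched like this (in JavaScript, since the thread names no language; the box field names and cell size are assumptions, and a uniform grid stands in for the quadtree):

```javascript
// Axis-aligned bounding boxes overlap iff no axis separates them.
function aabbOverlap(a, b) {
    return a.minX <= b.maxX && b.minX <= a.maxX &&
           a.minY <= b.maxY && b.minY <= a.maxY;
}

// Broad phase via a uniform grid: only objects whose boxes touch a
// common cell become candidate pairs, avoiding the O(n^2) all-pairs check.
function candidatePairs(boxes, cellSize) {
    var cells = {};
    boxes.forEach(function (b, i) {
        var x0 = Math.floor(b.minX / cellSize), x1 = Math.floor(b.maxX / cellSize);
        var y0 = Math.floor(b.minY / cellSize), y1 = Math.floor(b.maxY / cellSize);
        for (var x = x0; x <= x1; x++)
            for (var y = y0; y <= y1; y++)
                (cells[x + "," + y] = cells[x + "," + y] || []).push(i);
    });
    var seen = {};
    var pairs = [];
    Object.keys(cells).forEach(function (k) {
        var ids = cells[k];
        for (var i = 0; i < ids.length; i++)
            for (var j = i + 1; j < ids.length; j++) {
                var key = ids[i] + "-" + ids[j];
                if (!seen[key] && aabbOverlap(boxes[ids[i]], boxes[ids[j]])) {
                    seen[key] = true;
                    pairs.push([ids[i], ids[j]]);
                }
            }
    });
    return pairs;
}
```

Only the surviving pairs then need the expensive exact test against the irregular shapes themselves.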
I guess there should be a loop which takes one object as a reference, finds its coordinates, and then checks it against all the other objects to see if there is any collision. I am not sure how well my solution holds up for millions of objects.
Pseudo-code:
for each irregular shaped object1
    left1   = object1->x;
    right1  = object1->x + object1->width;
    top1    = object1->y;
    bottom1 = object1->y + object1->height;
    for each irregular shaped object2
    {
        left2   = object2->x;
        right2  = object2->x + object2->width;
        top2    = object2->y;
        bottom2 = object2->y + object2->height;
        bool bCollide = 1; // assume collision
        if (bottom1 < top2) bCollide = 0;
        if (top1 > bottom2) bCollide = 0;
        if (right1 < left2) bCollide = 0;
        if (left1 > right2) bCollide = 0;
        // bCollide is 1 only if the two bounding boxes overlap
    }
