Given a 3D grid, a 3D point as sphere center, and a radius, I'd like to quickly calculate all cells contained in or intersected by the sphere.
Currently I take the (grid-aligned) bounding box of the sphere and calculate the two cells for the min and max points of this bounding box. Then, for each cell between those two cells, I do a box-sphere intersection test.
It would be great if there were something more efficient.
Thanks!
There's a version of the Bresenham algorithm for drawing circles. Consider the two-dimensional plane at z=0 (assume the sphere is at (0,0,0) for now), and look only at the x-y plane of grid points. Starting at x=R, y=0, follow the Bresenham algorithm up to y=R, x=0, except instead of drawing, you use the result to know that all grid points with lower x coordinates (down to x = x_center) are inside the circle. Put those in a list, count them, or otherwise make note of them. When done with the two-dimensional problem, repeat with varying z, using a reduced radius R(z) = sqrt(R^2 - z^2) in place of R, until z=R.
If the sphere center is indeed located on a grid point, you know that every grid point inside or outside the right half of the sphere has a mirror partner on the left side, and likewise top/bottom, so you can do half the counting/listing per dimension. You can also save time by running Bresenham only to the 45-degree line, because any x,y point relative to the center has a partner y,x. If the sphere can be anywhere, you will have to compute results for each octant.
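In case it helps, here is a minimal sketch of that slice-and-row filling idea (my own illustration, not code from the question): it assumes unit-sized cells addressed by integer indices, and for brevity it computes each row's half-width with a sqrt instead of the integer-only midpoint/Bresenham update, but the loop structure is the same. As written it collects cells whose centers fall inside the sphere; to also catch cells the sphere merely grazes, widen each row extent by half a cell diagonal.

#include <array>
#include <cmath>
#include <vector>

// Collect all integer cells (ix, iy, iz) whose centers lie inside a sphere of
// radius r centered at (cx, cy, cz). Cell size is assumed to be 1.
std::vector<std::array<int, 3>> cellsInSphere(double cx, double cy, double cz, double r)
{
    std::vector<std::array<int, 3>> cells;
    const int zMin = static_cast<int>(std::floor(cz - r));
    const int zMax = static_cast<int>(std::ceil(cz + r));
    for (int iz = zMin; iz <= zMax; ++iz) {
        const double dz = iz - cz;
        const double rz2 = r * r - dz * dz;        // squared radius of this z slice
        if (rz2 < 0.0) continue;
        const int yMin = static_cast<int>(std::floor(cy - std::sqrt(rz2)));
        const int yMax = static_cast<int>(std::ceil(cy + std::sqrt(rz2)));
        for (int iy = yMin; iy <= yMax; ++iy) {
            const double dy = iy - cy;
            const double rx2 = rz2 - dy * dy;      // squared half-width of this row
            if (rx2 < 0.0) continue;
            const double rx = std::sqrt(rx2);
            // every cell between cx - rx and cx + rx on this row is inside the sphere
            for (int ix = static_cast<int>(std::ceil(cx - rx));
                 ix <= static_cast<int>(std::floor(cx + rx)); ++ix)
                cells.push_back({ix, iy, iz});
        }
    }
    return cells;
}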
No matter how efficiently you calculate an individual cell being inside or outside the sphere, your algorithm will always be O(radius^3) because you have to mark that many cells. DarenW's suggestion of the midpoint (aka Bresenham) circle algorithm could give a constant factor speedup, as could simply testing for intersection using the squared radius to avoid the sqrt() call.
If you want better than O(r^3) performance, then you may be able to use an octree instead of a flat grid. Each node of the tree could be marked as being entirely inside, entirely outside, or partially inside the sphere. For partially inside nodes, you recurse down the tree until you get to the finest-grained cells. This will still require marking O(r^2 log r) nodes [O(r^2) nodes on the boundary, O(log r) steps through the tree to get to each of them], so it might not be worth the trouble in your application.
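As a rough sketch of that recursion (the node layout and the mark calls are placeholders of mine, not part of the question): classify each node by comparing the sphere's squared radius against the squared distances to the node's nearest and farthest points, and only split the partially covered nodes.

#include <algorithm>
#include <cmath>

struct Box { double min[3], max[3]; };   // axis-aligned bounds of an octree node

// Squared distance from the sphere center to the box (0 if the center is inside).
double sqDistToBox(const Box& b, const double c[3]) {
    double d2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        double d = std::max({b.min[i] - c[i], 0.0, c[i] - b.max[i]});
        d2 += d * d;
    }
    return d2;
}

// Squared distance from the sphere center to the farthest corner of the box.
double sqFarthestCorner(const Box& b, const double c[3]) {
    double d2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        double d = std::max(std::abs(c[i] - b.min[i]), std::abs(c[i] - b.max[i]));
        d2 += d * d;
    }
    return d2;
}

void classify(const Box& node, const double center[3], double radius, int depth, int maxDepth)
{
    const double r2 = radius * radius;
    if (sqDistToBox(node, center) > r2) return;      // entirely outside: prune this subtree
    if (sqFarthestCorner(node, center) <= r2) {
        // markWhole(node);                          // entirely inside: mark the node as a whole
        return;
    }
    if (depth == maxDepth) {
        // markCell(node);                           // finest-grained cell intersected by the boundary
        return;
    }
    const double mid[3] = { 0.5 * (node.min[0] + node.max[0]),
                            0.5 * (node.min[1] + node.max[1]),
                            0.5 * (node.min[2] + node.max[2]) };
    for (int i = 0; i < 8; ++i) {                    // partially inside: recurse into the 8 children
        Box child;
        for (int a = 0; a < 3; ++a) {
            child.min[a] = (i >> a & 1) ? mid[a] : node.min[a];
            child.max[a] = (i >> a & 1) ? node.max[a] : mid[a];
        }
        classify(child, center, radius, depth + 1, maxDepth);
    }
}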
I am trying to find a method that quickly finds all cubes of size L that would be contained by a viewing frustum, maybe even using CUDA.
I have made a DDA traversal for raycasting, which is like a 1D case to me and simple, as I only move along the line at a known distance.
My instinct was to create a bounding box of the frustum and subdivide this space into a spatial grid of cubes of size L, then test each grid cell's center for being inside the frustum. Since the frustum is a pyramid, it seems only about half of the bounding box's cells would actually be occupied, so I feel this method is doing too much work. It will surely work, though; I am hoping for a less naive or faster geometric approach.
Perhaps ray-cast the left wall first, then the right wall, and then line-cast in between them? So, in a nutshell, I'm looking for the R3 version of something like a DDA traversal.
The fastest way to detect whether a vertex resides within a frustum is the dot product. The frustum consists of 4 planes (top, bottom, left, right) and two z values (front and back clipping). For each vertex, check two things: first, is it outside the front or back plane? And if not, is it inside the four side planes?
To check if a vertex is outside the front or back plane, check vertex.Z against your frustum:
isInsideZ = vertex.Z >= frustum.Zmin && vertex.Z <= frustum.Zmax;
To check if it's inside the four frustum 'walls', you need to compute the cross vectors for them, oriented towards the frustum's center. Then check the dot product of each cross vector with the position vector of the vertex relative to the respective plane. You obtain this position vector by subtracting some arbitrary point on the plane from the vertex you are testing. If the dot product is positive, the vertex is above (on the inner side of) that plane.
isAbove[i] = Vector3D.Dot(cross[i], vertex - planeloc[i]) > 0;
Where planeloc[i] is any point located on the respective plane i.
The vertex is inside the frustum if all conditions are met:
isInside = isInsideZ && isAbove[0] && isAbove[1] && isAbove[2] && isAbove[3];
This sounds a bit awkward to handle, but a lot of things can be done outside the grinding loop, such as computing the cross products (i.e. the frustum plane normals) or the plane location vectors. For example, if a plane is spanned by (1,0,0) and (1,1,0), then (1,0,0) already represents a point located on that plane.
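Here is a minimal sketch of that test once the wall normals and plane points have been precomputed (the Vec3/Plane/Frustum types are my own, for illustration); you would call it with, for example, each grid cell's center:

#include <array>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One side wall: an inward-facing normal and an arbitrary point on the plane.
struct Plane { Vec3 normal, point; };

struct Frustum {
    std::array<Plane, 4> walls;   // left, right, top, bottom (normals point inward)
    double zMin, zMax;            // front and back clipping values
};

// True if p lies between the front/back planes and on the inner side of all
// four walls; with the normals precomputed this is just four dot products.
bool isInsideFrustum(const Frustum& f, Vec3 p)
{
    if (p.z < f.zMin || p.z > f.zMax) return false;
    for (const Plane& w : f.walls)
        if (dot(w.normal, p - w.point) <= 0.0) return false;
    return true;
}

Note that testing only cell centers, as in the original question, will miss cells that are merely clipped by a wall; if those matter, test a cell's 8 corners or push the walls outward by the cell's half-diagonal.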
Normal marching cubes finds 12 edges per cube, but you can do 3 edges per cube, save the edges in an array, and then go through the cubes again, referencing the edges from the adjacent cubes rather than recalculating them.
The process of referencing adjacent cubes isn't clearly discussed on the Internet, so anyone using marching cubes would be welcome to help find the details of the solution. Do you know of an existing implementation?
Here is a picture showing, in yellow, the 3 edges you need for each cube instead of 12.
EDIT: I just found this solution, although it's only part of it:
Imagine the 3 edges coming from the corner of the cube with the lowest coordinates. All other edges then belong to other cubes. If our cube has coordinates (x,y,z), the neighboring cubes have coordinates (x+1,y,z), (x,y+1,z), (x,y,z+1), (x+1,y+1,z), (x+1,y,z+1), (x,y+1,z+1). You can think of each edge as a vector; the corner of our cube then has edges (1,0,0), (0,1,0), (0,0,1). The cube with coordinates (x+1,y,z) has edges (0,1,0) and (0,0,1) that belong to our cube. The cube (x+1,y+1,z) has only one edge, (0,0,1), that belongs to our cube. So if you store 3 elements per cube you can access them like this:
edge1 = cube[x][y][z][0];
edge2 = cube[x][y][z][1];
edge3 = cube[x][y][z][2];
edge4 = cube[x+1][y][z][1];
edge5 = cube[x+1][y][z][2];
edge6 = cube[x][y+1][z][0];
edge7 = cube[x][y+1][z][2];
edge8 = cube[x][y][z+1][0];
edge9 = cube[x][y][z+1][1];
edge10 = cube[x+1][y+1][z][2];
edge11 = cube[x+1][y][z+1][1];
edge12 = cube[x][y+1][z+1][0];
Now, which points does edge7 connect? The answer is (x,y+1,z) and (x,y+1,z)+(0,0,1) = (x,y+1,z+1).
Now, which cubes does edge7 connect? That is harder. We see that the z coordinate changes along the edge, which means the neighboring cubes all have the same z coordinate, while the other coordinates vary. Where the edge's corner offset is +1, the neighboring cube can have the larger coordinate; where it is +0, the cube can have the smaller one. So the edge connects cubes (x,y,z) and (x-1,y+1,z); the other 2 cubes that share the same edge are (x,y+1,z) and (x-1,y,z).
EDIT 2:
So I am doing this, and it isn't so simple. I have a single loop which simultaneously calculates the 8 corner points, the 12 edges, the edge interpolations, the bit values, and the vertex values for the edges, all in one pass.
So I am adding a new loop before it to calculate as much as possible up front and place it in arrays to be used in the complicated loop.
I can recycle the interpolated values of the intersection points along the edges in an array, although I will still have to recalculate all the corner points in the complicated loop, because the corner values are what decide the bit numbers that index into the vertex table. That confuses me! I thought that once I had the edge intersection values, I could use those directly to look up the triangle tables, without having to calculate the points all over again.
In fact, no.
Anyway, here is another bit of information from someone who has already done it, if only it were readable!
http://www.new-npac.org/projects/sv2all/sv2/vtk/patented/vtkImageMarchingCubes.cxx
Scroll to this line: "Cubes are responsible for edges on their min faces."
A simple way to reduce edge calculations in the way you are suggesting is to compute cubes one axis aligned plane at a time.
If you kept all of the cubes, with their edges, in memory, it would be easy to compute each edge only once and to find adjacent edges by indexing. However, you usually don't want to keep all the cubes in memory at once because of the space requirements.
A solution to this is to compute one plane of cubes at a time. i.e. an axis aligned cross-section, starting from one side and progressing to the opposite side. You then only need to keep at most two full planes of cubes in memory at a time. As you move through each plane you can reference shared edges in the previous plane and previously computed cubes in the current plane. As you move to the next plane you can deallocate the plane you will no longer need.
Edit: This article discusses doing just what I suggest:
http://alphanew.net/index.php?section=articles&site=marchoptim&lang=eng
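For what it's worth, here is a bare skeleton of that two-plane buffering (polygonizeCube() is a placeholder of mine for the usual per-cube marching-cubes work):

#include <utility>
#include <vector>

// Per-cube record of the vertex indices created on the three edges this cube
// "owns" (the ones leaving its minimum-coordinate corner); -1 = not created yet.
struct CubeEdges { int v[3] = { -1, -1, -1 }; };

void sweepPlanes(int sizeX, int sizeY, int sizeZ)
{
    // Only two slices of cube records are alive at any time.
    std::vector<CubeEdges> prev(sizeX * sizeY), curr(sizeX * sizeY);
    for (int z = 0; z < sizeZ; ++z) {
        for (int y = 0; y < sizeY; ++y)
            for (int x = 0; x < sizeX; ++x) {
                // polygonizeCube(x, y, z, prev, curr);
                // shared edges come from prev (the z-1 slice) and from cubes
                // already finished earlier in curr (smaller x or y).
            }
        std::swap(prev, curr);                    // the finished slice becomes the "previous" one
        curr.assign(curr.size(), CubeEdges{});    // reset the records for the next slice
    }
}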
Funny, because when I implemented my own MCs I came up with a similar solution.
When you start working with MCs you treat them as distinct cubes, but if you want to go for high performance you'll need to create the entire mesh as a whole, and creating vertex indices etc. is not so easy here. It gets even more interesting when you want to add smooth per-vertex normals. :)
To solve this I created a simple index cache mechanism to store vertex indices for each edge.
Then, for each computed edge I have the cube position x,y,z and the edge index, and I do the following:
For each axis:
    if the edge is on the '+' side of that axis:
        replace the edge index with its '-' side sibling
        increment the cube position along that axis
This simple operation gives me the correct cube position and an edge index of 0, 1 or 2. Then I compute a total cache index from the x, y, z, edgeIndex values with simple bit rotations.
When I have the cache index I check whether the stored value is bigger than -1. If it is, there is an already-computed vertex at this edge and I can reuse it. If it's -1, I need to create a new vertex and store its index in the cache. This way you compute each vertex only once, and you can even accumulate a normal value shared between every triangle containing your vertex.
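A minimal sketch of such a cache (my own, using a hash map rather than whatever kolenda's bit rotations produce; it assumes non-negative cube coordinates below 2^20 per axis). Before calling it, canonicalize the edge exactly as described above: if the edge lies on the '+' side of an axis, step the cube position by +1 along that axis and use the '-' side sibling, leaving an edge index of 0, 1 or 2.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct EdgeVertexCache {
    std::unordered_map<std::uint64_t, int> cache;   // edge key -> vertex index
    std::vector<float> vertices;                    // x,y,z triples (example storage)

    // Pack the owning cube's coordinates and the edge axis (0, 1, 2) into one key.
    static std::uint64_t key(int x, int y, int z, int axis) {
        return (static_cast<std::uint64_t>(x) << 44) |
               (static_cast<std::uint64_t>(y) << 24) |
               (static_cast<std::uint64_t>(z) << 4)  |
               static_cast<std::uint64_t>(axis);
    }

    int getOrCreate(int x, int y, int z, int axis) {
        auto it = cache.find(key(x, y, z, axis));
        if (it != cache.end()) return it->second;    // vertex already exists: reuse its index
        int index = static_cast<int>(vertices.size() / 3);
        // Interpolate the real position along the edge here; placeholder values for the sketch:
        vertices.insert(vertices.end(), {0.0f, 0.0f, 0.0f});
        cache.emplace(key(x, y, z, axis), index);
        return index;
    }
};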
Yes, I think I do it similarly to kolenda. I have a struct with 5 ints: the (cube) index and 4 vertex indices (A, B, C, D).
For the innermost loop (x), I just have lastXCache and nextXCache. On the 4 edges pointing in the -x direction, I check whether lastXCache.A != -1 and, if so, assign the previously calculated value, etc.
In the +x direction I store the calculated vertices in nextXCache. When the cube is done: lastXCache = nextXCache;
For the y and z directions it needs to be a List (the Unity term for a mutable array): the next y is the next row (offset sizeX), and the next z is the next plane (offset sizeX * sizeY).
The only disadvantage is that this way it has to run cube after cube, i.e. serially. But you can calculate different chunks in parallel.
Another way I thought of that could be more parallel would need 2 passes: 1. calculate the 3 edges of every cube; when that is done -> 2. build the triangles.
I don't really know which is better, but the way it works now seems fast enough, and it is even better with Unity Jobs: create one IJob per chunk/mesh.
I've just implemented collision detection using SAT and this article as a reference for my implementation. The detection works as expected, but I need to know where the two rectangles are colliding.
I need to find the center of the intersection, the black point in the image above (but I don't have the intersection area either). I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, which I don't need.
The information I have about the rectangles is the four points that represent them: the upper-right, upper-left, lower-right and lower-left coordinates. I'm trying to find an algorithm that can give me the intersection from these points.
I just need to put an image on top of it, like two crashed cars, so I put an image on top of the collision center. Any ideas?
There is another way of doing this: finding the center of mass of the collision area by sampling points.
Create the following function:
bool IsPointInsideRectangle(Rectangle r, Point p);
Define a search rectangle as:
TopLeft = (MIN(x), MAX(y))
TopRight = (MAX(x), MAX(y))
LowerLeft = (MIN(x), MIN(y))
LowerRight = (MAX(x), MIN(y))
Where MIN and MAX are taken over the x and y coordinates of both rectangles.
You will now define a step for dividing the search area like a mesh. I suggest you use AVG(W,H)/2 where W and H are the width and height of the search area.
Then you iterate over the mesh points, checking for each one whether it is inside the collision area:
IsPointInsideRectangle(rectangle1, point) AND IsPointInsideRectangle(rectangle2, point)
Define:
Xi: the i-th partition of the mesh along the X axis.
CXi: the count of mesh points in partition Xi that are inside the collision area.
Then the X coordinate of the collision center is the weighted average X = Σ(Xi · CXi) / Σ(CXi).
You can of course do the same thing with Y. Here is an illustrative example of this approach:
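Here is a small sketch of that sampling loop (my own; the Rectangle/Point types and the containment test are stand-ins for whatever you already have — the point-in-rectangle test below assumes the corners are stored in order around the rectangle):

#include <cmath>

struct Point { double x, y; };
struct Rectangle { Point corners[4]; };       // the four corners, in order, any rotation

// Point-in-convex-quad test, playing the role of IsPointInsideRectangle above.
bool IsPointInsideRectangle(const Rectangle& r, Point p)
{
    bool pos = false, neg = false;
    for (int i = 0; i < 4; ++i) {
        const Point& a = r.corners[i];
        const Point& b = r.corners[(i + 1) % 4];
        double cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        if (cross > 0) pos = true;
        if (cross < 0) neg = true;
    }
    return !(pos && neg);                     // inside (or on an edge) if all signs agree
}

// Approximate the collision center as the mean of the sample points that fall
// inside both rectangles. searchMin/searchMax is the search rectangle built
// from the MIN/MAX coordinates above; step controls the sampling density.
bool collisionCenter(const Rectangle& a, const Rectangle& b,
                     Point searchMin, Point searchMax, double step, Point& center)
{
    double sumX = 0.0, sumY = 0.0;
    int count = 0;
    for (double y = searchMin.y; y <= searchMax.y; y += step)
        for (double x = searchMin.x; x <= searchMax.x; x += step)
            if (IsPointInsideRectangle(a, {x, y}) && IsPointInsideRectangle(b, {x, y})) {
                sumX += x; sumY += y; ++count;
            }
    if (count == 0) return false;             // no overlap found at this resolution
    center = { sumX / count, sumY / count };
    return true;
}

Averaging all inside samples gives the same center of mass as the per-axis weighted sums above, just written in one pass.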
You need to intersect the boundaries of the boxes using the line-to-line intersection equation/algorithm.
http://en.wikipedia.org/wiki/Line-line_intersection
Once you have the crossing points, you might be OK with the average of those points, or possibly the center along a particular direction. The "middle" is a little vague in the question.
Edit: in addition to this, you need to work out whether any of the corners of either rectangle are inside the other (this should be easy enough to work out, even from the intersections). These corners should be included with the intersection points when calculating the "average" center point.
This one's tricky because irregular polygons have no defined center. Since your polygons are (in the case of rectangles) guaranteed to be convex, you can probably find the corners of the polygon that comprises the collision (which can include corners of the original shapes or intersections of the edges) and average them to get ... something. It will probably be vaguely close to where you would expect the "center" to be, and for regular polygons it would probably match exactly, but whether it would mean anything mathematically is a bit of a different story.
I've been fiddling mathematically and have come up with the following, which solves the smoothness problem when points appear and disappear (as can happen when the movement of a hitbox causes the intersection to change from a rectangle to a triangle or vice versa). Without this extra step, adding and removing corners will cause the centroid to jump.
Here, take this fooplot.
The plot illustrates 2 rectangles, R and B (for Red and Blue). The intersection sweeps out an area G (for Green). The Unweighted and Weighted Centers (both Purple) are calculated via the following methods:
(0.225, -0.45): Average of corners of G
(0.2077, -0.473): Average of weighted corners of G
A weighted corner of a polygon is defined as the coordinates of the corner, weighted by the sin of the angle of the corner.
This polygon has two 90-degree angles, one 59.03-degree angle, and one 120.96-degree angle. (Both of the non-right angles have the same sine, sin(Ɵ) = 0.8574929...)
The coordinates of the weighted center are thus:
( (sin(Ɵ) * (0.3 + 0.6) + 1 - 1) / (2 + 2 * sin(Ɵ)), // x
(sin(Ɵ) * (1.3 - 1.6) + 0 - 1.5) / (2 + 2 * sin(Ɵ)) ) // y
= (0.2077, -0.473)
With the provided example the difference isn't very noticeable, but if the 4-gon were much closer to a 3-gon, there would be a significant deviation.
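If it helps, the weighting can be computed directly from the polygon's corners without extracting the angles, since the sine of a corner angle is just the normalized cross product of its two edge vectors (a sketch of mine, assuming the corners are given in order around the polygon):

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Centroid of a convex polygon's corners, each corner weighted by the sine of
// its interior angle, as described above.
Point weightedCentroid(const std::vector<Point>& corners)
{
    const int n = static_cast<int>(corners.size());
    double wSum = 0.0, x = 0.0, y = 0.0;
    for (int i = 0; i < n; ++i) {
        const Point& prev = corners[(i + n - 1) % n];
        const Point& cur  = corners[i];
        const Point& next = corners[(i + 1) % n];
        double ax = prev.x - cur.x, ay = prev.y - cur.y;   // edge vector to the previous corner
        double bx = next.x - cur.x, by = next.y - cur.y;   // edge vector to the next corner
        double w = std::fabs(ax * by - ay * bx) /
                   (std::hypot(ax, ay) * std::hypot(bx, by));   // = sin(corner angle)
        wSum += w;
        x += w * cur.x;
        y += w * cur.y;
    }
    return { x / wSum, y / wSum };
}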
If you don't need to know the actual coordinates of the region, you could make two CALayers whose frames are the rectangles, and use one to mask the other. Then, if you set an image in the one being masked, it will only show up in the area where they overlap.
I have two 2D rectangles, defined as an origin (x,y) a size (height, width) and an angle of rotation (0-360°). I can guarantee that both rectangles are the same size.
I need to calculate the approximate area of intersection of these two rectangles.
The calculation does not need to be exact, although it can be. I will be comparing the result with other areas of intersection to determine the largest area of intersection in a set of rectangles, so it only needs to be accurate relative to other computations of the same algorithm.
I thought about using the area of the bounding box of the intersected region, but I'm having trouble getting the vertices of the intersected region because of all of the different possible cases:
I'm writing this program in Objective-C in the Cocoa framework, for what it's worth, so if anyone knows any shortcuts using NSBezierPath or something you're welcome to suggest that too.
To supplement the other answers, your problem is an instance of line clipping, a topic heavily studied in computer graphics, and for which there are many algorithms available.
If you rotate your coordinate system so that one rectangle has a horizontal edge, then the problem is exactly line clipping from there on.
You could start at the Wikipedia article on the topic, and investigate from there.
A simple algorithm that will give an approximate answer is sampling.
Divide one of your rectangles up into a grid of small squares. For each grid point, check whether it is inside the other rectangle. The number of points that lie inside the other rectangle (times the area of one small square) will be a fairly good approximation of the overlapping area. Increasing the density of points will increase the accuracy of the calculation, at the cost of performance.
In any case, computing the exact intersection polygon of two convex polygons is an easy task, since any convex polygon can be seen as an intersection of half-planes. "Sequential cutting" does the job.
Choose one rectangle (any) as the cutting rectangle. Iterate through the sides of the cutting rectangle, one by one. Cut the second rectangle by the line that contains the current side of the cutting rectangle and discard everything that lies in the "outer" half-plane.
Once you finish iterating through all cutting sides, what remains of the other rectangle is the result.
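A short sketch of that sequential cutting (a plain Sutherland–Hodgman-style clip, written by me for illustration; it assumes the cutting rectangle's corners are given counter-clockwise):

#include <vector>

struct Pt { double x, y; };

// > 0 if point p lies to the left of the directed line a -> b.
static double side(Pt a, Pt b, Pt p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Intersection of segment p -> q with the infinite line through a -> b.
static Pt lineIntersect(Pt a, Pt b, Pt p, Pt q) {
    double t = side(a, b, p) / (side(a, b, p) - side(a, b, q));
    return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
}

// Clip `poly` against each edge of the convex `cutter`, discarding everything
// in the outer half-plane. What remains is the intersection polygon; its area
// can then be computed with the shoelace formula.
std::vector<Pt> clipConvex(std::vector<Pt> poly, const std::vector<Pt>& cutter)
{
    for (size_t i = 0; i < cutter.size() && !poly.empty(); ++i) {
        Pt a = cutter[i], b = cutter[(i + 1) % cutter.size()];
        std::vector<Pt> out;
        for (size_t j = 0; j < poly.size(); ++j) {
            Pt p = poly[j], q = poly[(j + 1) % poly.size()];
            bool pIn = side(a, b, p) >= 0.0, qIn = side(a, b, q) >= 0.0;
            if (pIn) out.push_back(p);                                // keep inner-side points
            if (pIn != qIn) out.push_back(lineIntersect(a, b, p, q)); // the edge crosses the cut line
        }
        poly = out;
    }
    return poly;
}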
You can actually compute the exact area.
Make one polygon out of the two rectangles. See this question (especially this answer), or use the gpc library.
Find the area of this polygon. See here.
The shared area is
area of rectangle 1 + area of rectangle 2 - area of aggregated polygon
Take each line segment of each rectangle and see if they intersect. There are several possibilities:
1. If none intersect, the shared area is zero, unless all points of one rectangle are inside the other. In that case the shared area is the area of the smaller one.
2a. If two consecutive edges of one rectangle intersect with a single edge of the other rectangle, this forms a triangle. Compute its area.
2b. If the edges are not consecutive, this forms a quadrilateral. Draw a line between two opposite corners of the quadrilateral to make two triangles. Compute the area of each and sum.
3. If two edges of one rectangle intersect with two edges of the other, you will have a quadrilateral. Compute as in 2b.
4. If each edge of one rectangle intersects with each edge of the other, you will have an octagon. Break it up into triangles (e.g. draw a ray from one vertex to each other vertex to fan it into triangles).
Edit: I have a more general solution.
Check the special case in 1.
Then start at any intersection point and follow the edges from there to the next intersection point, and so on, until you are back at the first intersection point. This forms a convex polygon. Draw a ray from the first vertex to each non-adjacent vertex (i.e. skip the vertices immediately to its left and right). This divides the polygon into a bunch of triangles; compute the area of each and sum.
A brute-force-ish way:
Take all points from the set of [corners of rectangles] + [points of intersection of edges].
Remove the points that are not inside or on the edge of both rectangles.
Now you have the corners of the intersection. Note that the intersection is convex.
Sort the remaining points by the angle formed between an arbitrary point from the set, an arbitrary other point, and the given point.
Now you have the points of the intersection in order.
Calculate the area the usual way (by cross product).
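For the last two steps, here is a sketch (mine, for illustration) that orders the surviving points around their centroid — which works because the intersection is convex — and then applies the cross-product (shoelace) area formula; collecting and filtering the points is assumed to be done already:

#include <algorithm>
#include <cmath>
#include <vector>

struct P { double x, y; };

double convexIntersectionArea(std::vector<P> pts)
{
    if (pts.size() < 3) return 0.0;
    P c{0.0, 0.0};                                  // centroid of the surviving points
    for (const P& p : pts) { c.x += p.x; c.y += p.y; }
    c.x /= pts.size(); c.y /= pts.size();
    std::sort(pts.begin(), pts.end(), [&](const P& a, const P& b) {
        return std::atan2(a.y - c.y, a.x - c.x) < std::atan2(b.y - c.y, b.x - c.x);
    });
    double twiceArea = 0.0;                         // shoelace / cross-product sum
    for (size_t i = 0; i < pts.size(); ++i) {
        const P& a = pts[i];
        const P& b = pts[(i + 1) % pts.size()];
        twiceArea += a.x * b.y - a.y * b.x;
    }
    return std::fabs(twiceArea) * 0.5;
}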
I have a 3D polygon mesh and a corresponding 2D polygon mesh (actually from a UV map) which I'm using to map the geometry onto a 2D plane. Given a point on the plane, how can I efficiently find the polygon on which it's resting in order to map that 2D point back into 3D?
The best approach I can think of is to store the polygons in a 2D interval tree, and use that to get candidate polygons. Is there a simpler approach?
To clarify, this is not for a shader. I'm actually taking a 2D physical simulation and rendering it wrapped around a 3D mesh. For drawing each object, I need to figure out what point in 3D corresponds to its real 2D position.
One approach I've seen for triangle meshes goes as follows: choose a triangle, and imagine that each of the sides defines a half space. For a given edge, the half space boundary is the line containing the edge, and the half space does not contain the triangle. Choose an edge whose corresponding half space contains your target point. Then select the triangle on the other side of edge, and repeat the process.
Using this method, you will eventually end up at the triangle that contains your target point.
This method is arguably simpler than implementing a 2D interval tree, although the search is less efficient (if n is the number of triangles, it is O(√n) rather than O(log n)). Also, it should work for a polygon mesh, as long as the polygons are convex.
So, if I were trying to just get the thing implemented, I'd probably start with a global search of all triangles - compute the barycentric coordinates of that 2d point for each triangle, find the triangle where the barycentric coordinates are all positive, and then use those to map to 3d (multiply the stu position by the 3d points). I'd do this first, and only if it's not fast enough would I try something more complex.
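A minimal sketch of that global barycentric search for a single point (types and names are mine, for illustration):

#include <array>

struct V2 { double x, y; };
struct V3 { double x, y, z; };

// Barycentric coordinates of p with respect to the 2D triangle (a, b, c).
std::array<double, 3> barycentric(V2 p, V2 a, V2 b, V2 c)
{
    double det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    double s = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    double t = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    return { s, t, 1.0 - s - t };
}

// If all three weights are non-negative, p lies inside this UV triangle, and
// applying the same weights to the triangle's 3D vertices gives the 3D point.
bool mapTo3D(V2 p, const V2 uv[3], const V3 xyz[3], V3& out)
{
    auto w = barycentric(p, uv[0], uv[1], uv[2]);
    if (w[0] < 0.0 || w[1] < 0.0 || w[2] < 0.0) return false;   // not in this triangle
    out = { w[0] * xyz[0].x + w[1] * xyz[1].x + w[2] * xyz[2].x,
            w[0] * xyz[0].y + w[1] * xyz[1].y + w[2] * xyz[2].y,
            w[0] * xyz[0].z + w[1] * xyz[1].z + w[2] * xyz[2].z };
    return true;
}

Loop this over all triangles for the global search, or only over the neighbours of the last known triangle for the per-frame walk described below.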
If it's possible to iterate by triangle rather than by 2d points, then the barycentric method would probably be fast enough. But it seems like you've got a bunch of 2d points at arbitrary positions that need to be mapped, and the points change position from frame to frame?
If you've got this kind of situation, you could probably get a big speedup by implementing a local update per frame. Each 2d point would remember which triangle it was within. Set that as the current triangle. Test if the new position is within the current triangle. If not, then you want to walk the mesh to the adjacent triangle which is closest to the target 2d point. Each edge-adjacent triangle is composed of the two common points on the edge, plus another point. Find which edge-adjacent triangle's other point is closest to the target, and set that as current. Then iterate - seems like it should find it pretty quickly? You could also cache a max size for each triangle, so if the point has moved a lot you can just iterate to the next neighbor without doing the barycentric computation (the max size would need to be the distance such that if you are farther than that distance from any triangle point there is no chance you're inside the triangle. This is the length of the largest edge).
But as you mention in your comments, you can run into problems with meshes that have concavities, holes, or separate connected components, where you may fall into a local minimum. There are a couple of ways to deal with this. I think the simplest is to keep a list of all visited triangles (maybe as a flag on the triangle, vector< bool > or set< triangle index >) and refuse to revisit a triangle. If you find that you've visited all the neighbors of your current triangle, then fall back to a global search. Such failures are likely to be uncommon, so it shouldn't hurt your performance too much.
This kind of per-frame updating can be very fast, and might even be a decent approach for computing the initial containing triangles - just choose a random triangle and walk from there (changes from checking all n triangles to only those that are in roughly a straight line to the target). If it's not fast enough, what you could do is keep a k-d tree (or something similar) of the 2d mesh points as well as a single touching triangle index for each mesh point. To seed the iteration, find the closest point to the target 2d point in the k-d tree, set the adjacent triangle to be current, and then iterate.