What would be an efficient approach for constraining an object so that it's always at least partially intersecting the view frustum?
The use case is that when viewing a model I want to clamp camera panning, as well as model translation, so that the view frustum is never looking at empty space.
One approach I tried was to wrap the model objects in bounding volumes, then enforce the constraint when those fall outside the frustum. I've tried bounding boxes so far, but am considering using a minimal convex hull.
The problem is that when you zoom in close enough, it's still possible to be looking at empty space within the boundary, as shown in the attached diagram.
This is for a WebGL application, so it needs to be fairly efficient in JavaScript and scale to thousands of vertices.
Ideally you would have an AABB tree of your mesh; then you can recursively project nodes onto the camera/screen until you get an intersection.
http://www.codersnotes.com/algorithms/projected-area-of-an-aabb
Edit: this is essentially what a frustum-culling algorithm does against an AABB tree anyway, so if you're after an optimized solution, look into optimized frustum-culling techniques:
https://fgiesen.wordpress.com/2010/10/17/view-frustum-culling/
http://www2.in.tu-clausthal.de/~zach/teaching/cg_literatur/vfc_bbox.pdf
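To make that concrete, here is a minimal sketch (TypeScript, since the question targets WebGL/JavaScript) of the plane/box rejection test those links describe, assuming the six frustum planes are stored as (a, b, c, d) coefficients with normals pointing into the frustum; all names are illustrative:

```typescript
// Hypothetical types; plane normals are assumed to point into the frustum.
type Plane = { a: number; b: number; c: number; d: number };
type AABB = { min: [number, number, number]; max: [number, number, number] };

// Returns false only when the box is entirely outside some frustum plane.
// Uses the "positive vertex" trick: per plane, test just the box corner
// that lies farthest along the plane normal.
function aabbIntersectsFrustum(box: AABB, planes: Plane[]): boolean {
  for (const p of planes) {
    const x = p.a >= 0 ? box.max[0] : box.min[0];
    const y = p.b >= 0 ? box.max[1] : box.min[1];
    const z = p.c >= 0 ? box.max[2] : box.min[2];
    if (p.a * x + p.b * y + p.c * z + p.d < 0) return false; // fully outside
  }
  return true; // intersecting or inside (conservative)
}
```

Recursing into an AABB tree with this test, and stopping at the first node that intersects, answers "is any geometry visible" without touching most of the mesh.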
Since a general approximation is acceptable, I would try a point cloud. First create a list of points, either from every Nth mesh vertex or from every Nth face center, with N being, for example, 10. Once you have this point array, all you do on each camera update is check whether any of the points is inside the frustum. If none is, the user has moved or rotated the camera too far and you need to restore the last acceptable orientation.
I know this may seem quite crude, but I think it is fairly easy to implement, and the frustum check for a vertex is just a couple of multiplications per plane. It won't be perfect, though.
You can also make the frustum a little smaller to ensure that there is some border around the object.
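To illustrate, a minimal TypeScript sketch of that per-point test, assuming the frustum is given as six inward-facing planes; the margin parameter implements the "slightly smaller frustum" border mentioned above:

```typescript
// Planes assumed normalized, with normals pointing into the frustum.
type Plane = { a: number; b: number; c: number; d: number };

// A point is accepted only if it is at least `margin` inside every plane.
function pointInFrustum(x: number, y: number, z: number,
                        planes: Plane[], margin = 0): boolean {
  for (const p of planes) {
    if (p.a * x + p.b * y + p.c * z + p.d < margin) return false;
  }
  return true;
}

// Check the sampled point cloud (packed xyz triples) on each camera update;
// if this returns false, restore the last acceptable camera orientation.
function anySampleVisible(points: Float32Array, planes: Plane[],
                          margin = 0): boolean {
  for (let i = 0; i < points.length; i += 3) {
    if (pointInFrustum(points[i], points[i + 1], points[i + 2], planes, margin)) {
      return true;
    }
  }
  return false;
}
```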
I am new to the world of WebGL and shaders, and I was wondering what the best way is to paint only the pixels within a path. I have the 2D position of each point, and I would like to fill the inside of the path with a color.
(Two images were attached: the 2D point positions, and the desired filled result.)
Could someone give me a direction? Thanks!
Unlike with the Canvas 2D API, doing this in WebGL requires you to triangulate the path yourself. WebGL only draws points (squares), lines, and triangles; everything else (circles, paths, 3D models) is up to you to build creatively from those three primitives.
In your case you need to turn your path into a set of triangles. There are tons of algorithms to do that, each with tradeoffs: some only handle convex paths, some don't handle holes, some add extra points in the middle and some don't, and some are faster than others. There are also libraries that do it, like this one for example.
It's kind of a big topic, arguably too big to go into in detail here; other SO questions about it already have answers.
Once you have the path turned into triangles, it's pretty straightforward to pass them to WebGL and have them drawn.
Plenty of answers on SO already cover that as well. Examples:
Drawing parametric shapes in webGL (without three.js)
Or you might prefer some tutorials
There is a simple triangulation (mesh generation) for your case. First sort all your vertices into CCW order. Then calculate the middle point of all the vertices. Then iterate over your sorted vertices and push a triangle made of the middle point, the point at vertices[index], and the point at vertices[index + 1] to the mesh, wrapping around to vertices[0] at the end, as in the sketch below.
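A minimal TypeScript sketch of that centroid fan, assuming the path is convex (or at least star-shaped around its centroid), which is the only case where this simple scheme is guaranteed to be correct:

```typescript
type Point = { x: number; y: number };

// Fan triangulation around the centroid of the vertices.
function fanTriangulate(vertices: Point[]): Point[][] {
  const cx = vertices.reduce((s, v) => s + v.x, 0) / vertices.length;
  const cy = vertices.reduce((s, v) => s + v.y, 0) / vertices.length;
  const center: Point = { x: cx, y: cy };

  // Sort into CCW order by angle around the centroid.
  const sorted = [...vertices].sort(
    (a, b) => Math.atan2(a.y - cy, a.x - cx) - Math.atan2(b.y - cy, b.x - cx)
  );

  // One triangle per edge: (center, v[i], v[i+1]), wrapping at the end.
  const triangles: Point[][] = [];
  for (let i = 0; i < sorted.length; i++) {
    triangles.push([center, sorted[i], sorted[(i + 1) % sorted.length]]);
  }
  return triangles;
}
```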
I'm implementing a nav-mesh pathfinding system and I need to be able to raycast between two points in the mesh and get a list of all edges crossed by the ray. Obviously I'll need to test individual line intersections, but I expect there is an efficient way of picking which lines actually need to be checked rather than brute-force iterating over every edge in the entire mesh. Does anyone know how I might go about that?
If your mesh is a rectangular grid, consider the efficient method of Amanatides and Woo from the article "A Fast Voxel Traversal Algorithm for Ray Tracing".
Implementation example
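In case it helps, a minimal 2D TypeScript sketch of the Amanatides–Woo traversal, assuming unit-sized grid cells; the `visit` callback is hypothetical and would look up the nav-mesh edges stored in each cell:

```typescript
// Visit every grid cell the segment (x0,y0)->(x1,y1) passes through.
// Assumes unit cells; endpoints exactly on cell boundaries may need extra care.
function traverseGrid(x0: number, y0: number, x1: number, y1: number,
                      visit: (cx: number, cy: number) => void): void {
  let cx = Math.floor(x0), cy = Math.floor(y0);
  const endX = Math.floor(x1), endY = Math.floor(y1);
  const dx = x1 - x0, dy = y1 - y0;
  const stepX = dx > 0 ? 1 : dx < 0 ? -1 : 0;
  const stepY = dy > 0 ? 1 : dy < 0 ? -1 : 0;

  // t-cost (0..1 along the segment) of crossing one full cell in x or y,
  // and the t at which the segment hits the next x / y cell boundary.
  const tDeltaX = stepX !== 0 ? Math.abs(1 / dx) : Infinity;
  const tDeltaY = stepY !== 0 ? Math.abs(1 / dy) : Infinity;
  let tMaxX = stepX !== 0 ? (stepX > 0 ? cx + 1 - x0 : x0 - cx) * tDeltaX : Infinity;
  let tMaxY = stepY !== 0 ? (stepY > 0 ? cy + 1 - y0 : y0 - cy) * tDeltaY : Infinity;

  visit(cx, cy);
  while (cx !== endX || cy !== endY) {
    if (tMaxX < tMaxY) { tMaxX += tDeltaX; cx += stepX; }
    else               { tMaxY += tDeltaY; cy += stepY; }
    visit(cx, cy);
  }
}
```

Only the edges stored in the visited cells then need the exact segment-segment intersection test.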
I have several 3D models in an OpenGL ES application for iPhone, and at some point I want the user to touch the screen and act on them. The problem is recognizing which one, among those rendered on the screen, has been touched. To achieve this I calculated the picking ray as suggested by the OpenGL FAQ, and I now want to detect whether it intersects any model.
I've looked at the Irrlicht source code and found that I can calculate the intersection between the ray and each individual model triangle (they do this by first checking whether the ray intersects the triangle's plane and then whether the intersection point falls inside the triangle, though there is a more efficient way to do so, as stated here).
My question is: do I really need to do all this computation for every single triangle of every model? Isn't there a better way (maybe less precise) to achieve a similar result?
You are quite right, there ARE better ways. One method is to build an octree around the object. Then, if the ray intersects the root's box, you check which of its 8 children it intersects, and so on recursively until you are left with a few triangles to run an intersection test against. Another method is to build a k-d tree.
There are many, many ways to handle this problem efficiently. Look up information on ray-tracing acceleration structures.
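The octree descent hinges on a cheap ray-versus-box test at each node. A minimal TypeScript sketch of the standard slab method, with illustrative types:

```typescript
type Vec3 = [number, number, number];

// Slab test: intersect the ray's t-interval with each axis slab of the box;
// the box is hit iff the interval stays non-empty. (Origins lying exactly on
// a slab plane with a zero direction component can produce NaN edge cases.)
function rayIntersectsAABB(origin: Vec3, dir: Vec3,
                           boxMin: Vec3, boxMax: Vec3): boolean {
  let tMin = 0, tMax = Infinity;
  for (let i = 0; i < 3; i++) {
    const inv = 1 / dir[i]; // +/-Infinity when dir[i] === 0, which still works
    let t0 = (boxMin[i] - origin[i]) * inv;
    let t1 = (boxMax[i] - origin[i]) * inv;
    if (t0 > t1) [t0, t1] = [t1, t0];
    tMin = Math.max(tMin, t0);
    tMax = Math.min(tMax, t1);
    if (tMin > tMax) return false;
  }
  return true;
}
```

Nodes whose boxes fail this test are skipped along with all their children, which is what turns the per-triangle search into a logarithmic descent.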
I have a 3D modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single-sided when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work.. :)
This is a robust but slow way to get there:
Take a corner of a bounding box offset from the centroid (force it to be guaranteed outside your closed polygonal mesh), then create a line segment from that to the center point of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there are an odd number of intersections, the segment reaches the triangle from outside, so when the mesh is correctly oriented the angle between the segment's direction and the triangle's normal should be greater than 90° (i.e., their dot product should be negative). If there are an even number of intersections, the test is reversed.
This should work for very complex surfaces, but they must be closed, or it breaks down.
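A minimal TypeScript sketch of that procedure, using the standard Möller–Trumbore ray/triangle test; the types and names are illustrative, and the 90° comparison reduces to the sign of a dot product:

```typescript
type Vec3 = [number, number, number];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0]-b[0], a[1]-b[1], a[2]-b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 =>
  [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]];
const dot = (a: Vec3, b: Vec3) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];

// Möller–Trumbore: returns the ray parameter t of the hit, or null on a miss.
function rayTriangle(orig: Vec3, dir: Vec3, [a, b, c]: Vec3[]): number | null {
  const e1 = sub(b, a), e2 = sub(c, a);
  const p = cross(dir, e2), det = dot(e1, p);
  if (Math.abs(det) < 1e-12) return null; // ray parallel to the triangle
  const inv = 1 / det, s = sub(orig, a);
  const u = dot(s, p) * inv;
  if (u < 0 || u > 1) return null;
  const q = cross(s, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t > 1e-12 ? t : null;
}

// True if the chosen triangle's normal appears to point outward.
function looksOutwardOriented(outsidePoint: Vec3, tris: Vec3[][],
                              triIndex: number, normal: Vec3): boolean {
  const [a, b, c] = tris[triIndex];
  const center: Vec3 =
    [(a[0]+b[0]+c[0])/3, (a[1]+b[1]+c[1])/3, (a[2]+b[2]+c[2])/3];
  const dir = sub(center, outsidePoint); // segment: t in (0, 1]
  let hits = 0;
  for (const tri of tris) {              // includes the generating triangle
    const t = rayTriangle(outsidePoint, dir, tri);
    if (t !== null && t <= 1 + 1e-9) hits++;
  }
  // Odd hit count: the segment ends approaching the surface from outside,
  // so an outward normal must face back toward the start (dot < 0).
  const facesBack = dot(dir, normal) < 0;
  return hits % 2 === 1 ? facesBack : !facesBack;
}
```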
"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL, there is a much better way that saves you all that work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all polygons are always two-sided. The only reason you would want one-sided lighting is if you have an open or partially inverted mesh and you want to somehow indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing, and AFAIK there is no sure-fire general method that can always determine the orientation. As you suggest, though, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of the normal. If the ray hits an even number of faces, the normal points to the outside; if odd, it points to the inside. Take care not to count the point of origin as an intersection. This approach only works if the mesh is a closed manifold, and it can hit edge cases if the ray happens to pass exactly between two polygons, so you might want to repeat it a number of times and take a majority vote.
I am new to culling. At first glance, it seems that most occlusion-culling algorithms work at the object level, not on individual meshes, which is practical for game rendering.
What I am looking for is an algorithm that culls all meshes within a single object that are occluded from a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n²)) is too slow.
I notice that the Blender GUI identifies the occluded meshes for you in real-time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?
Blender performs both view frustum culling and occlusion culling, based on the dynamic AABB tree acceleration structures from the Bullet physics library. Occluders and objects are represented by their bounding volume (axis aligned bounding box).
View frustum culling just traverses the AABB tree, given a camera frustum.
The occlusion culling is based on an occlusion buffer, a kind of depth buffer that is initialized using a very simple software renderer: a basic ray tracer that uses the dynamic AABB tree acceleration structures. You can choose the resolution of the occlusion buffer to trade accuracy against efficiency.
See also the documentation: http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.49/Game_Engine#Performance
Blender implementation from the Blender source tree:
blender\source\gameengine\Physics\Bullet\CcdPhysicsEnvironment.cpp method
bool CcdPhysicsEnvironment::cullingTest(PHY_CullingCallback callback, void* userData, PHY__Vector4 *planes, int nplanes, int occlusionRes)
and the Bullet Physics SDK has some C++ sample code in Bullet/Extras/CDTestFramework.
Have you looked into things like Octrees?
The reason culling is not done at the mesh level is that a mesh is a very dumb renderer-level object, while a game object lives at the scene level, where all the culling occurs. No mesh-level culling decisions are taken; meshes are simply a way to batch primitives with similar shader state when their objects need to be rendered.
If you really need to cull at the mesh level, you might want to create one object per mesh and then a group object that contains the mesh objects. Be aware that you might actually lose performance, since culling at the mesh level is usually not worth it; it can break the flow of data transfers to the hardware.
Usually the first thing you do is check whether the meshes have any chance of occluding each other at all, often with a simple bounding primitive (bounding boxes or bounding spheres). If I recall correctly, octrees will help with this.
I tried the naive approach which was actually sufficiently fast for my app.
For each triangle in the mesh, check whether it is occluded by any other triangle in the mesh, hence O(n²). (I got high-accuracy results from only checking whether the center point of each triangle was occluded, though you should at least check the triangle's corner vertices too if accuracy is important.)
On a Pentium 4 machine (C++), a binary STL object of ~10,000 triangles took about one minute to complete.
The algorithm can be sped up by up to ~40 percent by sorting the triangles first, for example by area or by distance from the viewpoint (such triangles are more likely to occlude, so you can skip more checks).
The next optimization step would be to put the triangles in an octree, which should greatly reduce the number of occlusion checks.
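For reference, a minimal TypeScript sketch of that naive center-point test; it reuses the standard Möller–Trumbore ray/triangle intersection, and all names are illustrative:

```typescript
type Vec3 = [number, number, number];
const sub = (a: Vec3, b: Vec3): Vec3 => [a[0]-b[0], a[1]-b[1], a[2]-b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 =>
  [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]];
const dot = (a: Vec3, b: Vec3) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];

// Möller–Trumbore: returns the ray parameter t of the hit, or null on a miss.
function rayTriangle(orig: Vec3, dir: Vec3, [a, b, c]: Vec3[]): number | null {
  const e1 = sub(b, a), e2 = sub(c, a);
  const p = cross(dir, e2), det = dot(e1, p);
  if (Math.abs(det) < 1e-12) return null;
  const inv = 1 / det, s = sub(orig, a);
  const u = dot(s, p) * inv;
  if (u < 0 || u > 1) return null;
  const q = cross(s, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t > 1e-12 ? t : null;
}

// O(n^2): a triangle counts as visible if no other triangle is hit strictly
// before the segment from the eye to its center point reaches it.
function unoccludedTriangles(eye: Vec3, tris: Vec3[][]): Vec3[][] {
  return tris.filter((tri, i) => {
    const [a, b, c] = tri;
    const center: Vec3 =
      [(a[0]+b[0]+c[0])/3, (a[1]+b[1]+c[1])/3, (a[2]+b[2]+c[2])/3];
    const dir = sub(center, eye); // t = 1 exactly at the center
    return !tris.some((other, j) => {
      if (j === i) return false;
      const t = rayTriangle(eye, dir, other);
      return t !== null && t < 1 - 1e-9;
    });
  });
}
```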