Frustum culling on GPU - algorithm

I am learning 3D computer graphics and I want to implement frustum culling on the GPU. I've already implemented it in a software rendering system. That culling worked on polygons with a minimum of 3 vertices and, if a vertex was outside the frustum, added a new point on the frustum boundary. Here is the algorithm implementation, just in case.
Since I want to implement that algorithm on the GPU, I have several questions:
I was thinking of implementing frustum culling in a geometry shader, but it seems I wouldn't be able to cull the UV coordinates then. Is that right? Or is there a way to somehow pass those UV coordinates in, cull them in the geometry shader, and pass them on to the fragment shader?
I also thought of implementing it in the vertex shader, but then won't I be unable to get all the vertices of the current triangle? Maybe I just don't know the right way to get all the vertices per face? Wouldn't I end up culling vertices one by one?
Maybe I just don't know the right way to implement frustum culling on the GPU. Is there a better algorithm for frustum culling? Thanks.

I believe what you have implemented is 'clipping'.
'Frustum culling' usually refers to a technique where you discard invisible elements as a whole, and I believe that is not what you are asking about here.
Back to clipping:
Graphics APIs clip polygons in normalized device coordinate space by default. For example, with OpenGL, anything outside the 2x2x2 cube centered at (0,0,0) is clipped automatically after the vertex shader.
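For intuition, here is a small sketch in JavaScript (my own illustration, not anything the driver exposes) of the test the fixed-function clipper effectively applies to each clip-space vertex the vertex shader writes to gl_Position. Varyings such as UV coordinates are interpolated automatically at any new vertices the clipper introduces, so nothing extra is needed to keep them correct.

// Sketch only: a clip-space vertex (x, y, z, w) survives clipping when
// -w <= x <= w, -w <= y <= w and -w <= z <= w, i.e. when the post-divide NDC
// point lies inside the 2x2x2 cube centered at the origin (OpenGL conventions).
function insideClipVolume(x, y, z, w) {
  return -w <= x && x <= w &&
         -w <= y && y <= w &&
         -w <= z && z <= w;
}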

Related

Is there a way to change the way geometry is drawn in Three.js?

I'm new to WebGL and OpenGL. I've worked with an OpenGL library elsewhere that gives me the option of choosing how my geometry is "drawn". For example, I can select triangles, quads, line_loop, points, etc.
My question is: based on my research so far, Three.js removed the option for quads due to rendering issues. Are there any other options for how geometric shapes are drawn?
Here's a graph depicting what I mean: http://www.opentk.com/files/tmp/persistent/opentk/files/GeometricPrimitiveTypes.gif
Polygons and quads are pretty much useless, because your typical rasterizer (software or GPU) can deal only with convex, coplanar primitives. It's easy to construct a quad or polygon that's concave, not coplanar, or both. Hence quads and polygons have been removed from modern OpenGL. Triangles, in any form, are safe though; there's no way for a triangle to be concave or non-coplanar.
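For completeness, a minimal Three.js sketch (assuming a reasonably recent release; 'scene' is presumed to exist already) of the remaining options - the primitive type is chosen by the wrapper object rather than by a draw-mode flag:

const geometry = new THREE.BoxGeometry(1, 1, 1);   // always triangulated internally

const asTriangles = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
const asLines = new THREE.LineSegments(new THREE.WireframeGeometry(geometry),
                                       new THREE.LineBasicMaterial());
const asPoints = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.05 }));

scene.add(asTriangles, asLines, asPoints);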

3D algorithm for clamping a model to view frustum

What would be an efficient approach for constraining an object so that it's always at least partially intersecting the view frustum?
The use case is that when viewing a model I want to clamp camera panning, as well as model translation, so that the view frustum is never looking at empty space.
One approach I tried was to wrap the model objects in bounding volumes, then enforce the constraint when those fall outside the frustum. I've tried bounding boxes so far, but am considering using a minimal convex hull.
The problem is that when you zoom in close enough, it's still possible to be looking at empty space within the boundary, as shown in the attached diagram.
This is for a WebGL application, so it needs to be fairly efficient in JavaScript, and also work for thousands of vertices.
Ideally you would have an AABB tree of your mesh, and then you can recursively project onto the camera/screen until you get an intersection.
http://www.codersnotes.com/algorithms/projected-area-of-an-aabb
Edit: that's just what a frustum culling algorithm against an AABB tree does anyway, so looking for an optimized solution here means looking at optimized frustum culling techniques:
https://fgiesen.wordpress.com/2010/10/17/view-frustum-culling/
http://www2.in.tu-clausthal.de/~zach/teaching/cg_literatur/vfc_bbox.pdf
If a general approximation is acceptable, I would try a point cloud. First create a list of points - either every Nth mesh vertex or every Nth face centre, with N being, for example, 10. Once you have this point array, all you do is check whether any of the points is in the frustum while updating the camera orientation. If none are, it means the user moved or rotated the camera too much and you need to restore the last acceptable orientation.
I know this may seem quite ridiculous, but I think it is fairly easy to implement, and the frustum check for a vertex is just a couple of multiplications per plane. It won't be perfect though.
You can also make the frustum a little smaller to ensure that there is some border around the object.
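A minimal sketch of that sampled-point test, assuming a recent Three.js (the names samplePoints and anySampleVisible are made up here):

const frustum = new THREE.Frustum();
const viewProjection = new THREE.Matrix4();

// samplePoints: array of THREE.Vector3 in world space, e.g. every Nth vertex.
// Assumes the camera's matrices are up to date (the controls normally ensure this).
function anySampleVisible(camera, samplePoints) {
  viewProjection.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(viewProjection);
  // containsPoint is just a dot product against each of the six frustum planes.
  return samplePoints.some(p => frustum.containsPoint(p));
}

// In the camera controls: if (!anySampleVisible(camera, samplePoints)) restore the last accepted pose.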

Making Physijs take vertex shader displacements into account

I'm not sure if the title is the best, but I'll try to explain what my problem is. I'm playing around with Three.js and Physijs to make a simple game where the player can walk around on a field. I'm using a plane geometry as the ground and wanted to use a displacement map to change its shape. I previously manipulated the vertices directly (using just some mathematical formulas to displace the vertices) and used a HeightfieldMap for collision detection, and this worked great, but now that I have implemented displacement mapping through a vertex shader, Physijs treats the ground as a flat object. I'm guessing the manipulation of the vertices in the shader happens after the Physijs simulation, but is there any way to get Physijs to take the vertex displacement from the shader into account when doing collision detection? Alternatively, is there a good way to do displacement mapping without shaders?

WebGL Change Shape Animation

I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using Three.js and WebGL, but I'm having trouble finding any information on how to animate a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
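For reference, a minimal Three.js ShaderMaterial sketch of that attribute/uniform/mix() idea - this is not the code behind the linked demos, and the names endPosition and mixAmount are made up:

// 'endPosition' is a per-vertex attribute holding the flattened target position
// (supplied via geometry.setAttribute('endPosition', ...)); 'mixAmount' is a uniform
// driven from the animation loop, going from 0 to 1 over the transition.
const material = new THREE.ShaderMaterial({
  uniforms: { mixAmount: { value: 0.0 } },
  vertexShader: `
    attribute vec3 endPosition;
    uniform float mixAmount;
    varying vec2 vUv;
    void main() {
      vUv = uv;                                       // pass UVs on for texturing
      vec3 p = mix(position, endPosition, mixAmount); // blend start and goal positions
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    void main() { gl_FragColor = vec4(vUv, 0.0, 1.0); } // placeholder: visualize UVs
  `
});

// In the render loop:
// material.uniforms.mixAmount.value = Math.min(elapsedSeconds / durationSeconds, 1.0);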
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions, so that after the animation they are all sitting on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animating the sphere's vertices with values interpolated between the sphere's and the plane's original vertex positions. That way, at the start you would have the sphere shape and at the end, the plane shape.
Hope this helps; tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // tell Three.js to re-upload the changed vertices
// myGlobe is an instance of THREE.Mesh with a legacy THREE.Geometry; something_calculated is a
// THREE.Vector3 instance that you calculate in some manner (sphere-plane interpolation over time)
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation — for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
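A hedged sketch of such a vertex shader (GLSL, written here as a Three.js ShaderMaterial vertexShader string): 'time' is an assumed uniform going from 0 to 1 and 'radius' an assumed sphere radius, and the blend is a plain mix() rather than the nicer increasing-radius idea.

const vertexShader = `
  uniform float time;    // 0 = sphere, 1 = flat map
  uniform float radius;
  varying vec2 vUv;
  const float PI = 3.141592653589793;
  void main() {
    vUv = uv;
    // Position on the sphere, reconstructed purely from the UV coordinates
    // (the inverse of the usual equirectangular sphere UV mapping).
    float phi   = uv.x * 2.0 * PI;  // longitude
    float theta = uv.y * PI;        // latitude
    vec3 onSphere = radius * vec3(sin(theta) * cos(phi),
                                  cos(theta),
                                  sin(theta) * sin(phi));
    // Position on a flat rectangle of roughly the same extent as the unwrapped map.
    vec3 onPlane = vec3((uv.x - 0.5) * 2.0 * PI * radius,
                        (0.5 - uv.y) * PI * radius,
                        0.0);
    vec3 p = mix(onSphere, onPlane, time);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
  }
`;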

Most efficient algorithm for mesh-level, optimal occlusion culling?

I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering.
What I am looking for is an algorithm that culls all meshes within a single object that are occluded for a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n^2)) is too slow.
I notice that the Blender GUI identifies the occluded meshes for you in real-time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?
Blender performs both view frustum culling and occlusion culling, based on the dynamic AABB tree acceleration structures from the Bullet physics library. Occluders and objects are represented by their bounding volume (axis aligned bounding box).
View frustum culling just traverses the AABB tree, given a camera frustum.
The occlusion culling is based on an occlusion buffer, a kind of depth buffer that is initialized using a very simple software renderer: a basic ray tracer using the dynamic AABB tree acceleration structures. You can choose the resolution of the occlusion buffer to trade accuracy against efficiency.
See also documentation http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.49/Game_Engine#Performance
Blender implementation from the Blender source tree:
blender\source\gameengine\Physics\Bullet\CcdPhysicsEnvironment.cpp method
bool CcdPhysicsEnvironment::cullingTest(PHY_CullingCallback callback, void* userData, PHY__Vector4 *planes, int nplanes, int occlusionRes)
and the Bullet Physics SDK has some C++ sample code in Bullet/Extras/CDTestFramework.
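As a generic illustration (not Blender's or Bullet's actual code), the per-node test such a frustum-vs-AABB-tree traversal boils down to, written with Three.js types; the node layout (box, children, objects) is made up:

// A subtree can be skipped entirely as soon as its bounding box lies outside the frustum.
function collectVisible(node, frustum, out) {
  if (!frustum.intersectsBox(node.box)) return;   // node.box: THREE.Box3, whole subtree culled
  if (node.objects) out.push(...node.objects);    // leaf: contents are potentially visible
  if (node.children) node.children.forEach(child => collectVisible(child, frustum, out));
}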
Have you looked into things like Octrees?
The reason culling is not done at the mesh level is that a mesh is a very dumb renderer-level object, while a game object lives at the scene level, where all the culling occurs. No mesh-level culling decisions are taken; meshes are simply a way to batch primitives with similar shader states when their objects need to be rendered.
If you really need to cull at mesh level, then you might want to create one object per mesh and then create a group object that contains the mesh-objects. Be aware that you might actually lose performance since culling at mesh level is usually not worth it; it might break the flow of data transfers to the hardware.
Usually the first thing you do is check whether the meshes have any chance of occluding each other, often with a simple bounding primitive (bounding boxes or bounding spheres). If I recall correctly, octrees will help with this.
I tried the naive approach which was actually sufficiently fast for my app.
For each triangle in the mesh, check whether it is occluded by any other triangle in the mesh, thus O(n^2). (I got high-accuracy results from checking only whether the center point of each triangle was occluded, though you should at least check the triangle corner vertices too if accuracy is important.)
On a Pentium 4 machine (C++), a binary STL object of ~10,000 triangles took about 1 minute to complete.
The algorithm can be sped up by roughly 40 percent by sorting the triangles first, for example by area or by distance from the viewpoint (triangles more likely to occlude come first, so you can skip more checks).
The next optimization step would be to put the triangles in an octree, which should greatly reduce the number of occlusion checks.
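Roughly, the naive centre-point test described above could look like the following sketch (JavaScript with Three.js math types, not the original C++ code; 'triangles' is an assumed array of THREE.Triangle in world space):

// Returns one boolean per triangle: true when the ray from the viewpoint to the
// triangle's centre hits some other triangle first, i.e. the centre is occluded.
function naiveOcclusion(triangles, viewpoint) {
  const center = new THREE.Vector3();
  const hit = new THREE.Vector3();
  return triangles.map((tri, i) => {
    tri.getMidpoint(center);
    const dir = center.clone().sub(viewpoint);
    const dist = dir.length();
    const ray = new THREE.Ray(viewpoint, dir.normalize());
    // O(n) inner loop, so O(n^2) overall; sorting by area or distance lets you bail out sooner.
    return triangles.some((other, j) =>
      j !== i &&
      ray.intersectTriangle(other.a, other.b, other.c, false, hit) !== null &&
      hit.distanceTo(viewpoint) < dist - 1e-6);
  });
}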
