Does the GPU process invisible things? - performance

I'm making a Minecraft-like game in Unity 5. For the world rendering I don't know whether I should destroy cubes I can't see or just make them invisible.
My first idea was to destroy them, but re-creating them every time they become visible would take too much processor power, so I'm looking for alternatives. Is making them invisible a viable solution?
I'll be loading a ton of cubes at the same time; for those unfamiliar with Minecraft, here is a screenshot so you get the idea.
That is just part of what is rendered at the same time in a typical session.

Unity, like all graphics engines, can cause the GPU to process geometry that would not be visible on screen. The processes that try to limit this are culling and depth testing:
Frustum culling - prevents objects fully outside the camera's viewing area (frustum) from being rendered. The viewing frustum is defined by the near and far clipping planes and the four planes connecting near and far on each side. This is always on in Unity and is defined by your camera's settings. Excluded objects will not be sent to the GPU.
Occlusion culling - prevents objects that are within the camera's view frustum but completely occluded by other objects from being rendered. This is not on by default. For information on how to enable and configure it, see occlusion culling in the Unity manual. Occluded objects will not be sent to the GPU.
Back face culling - prevents polygons whose normals face away from the camera from being rendered. This happens at the shader level, so the geometry IS processed by the GPU. Most shaders do cull back faces. See the Cull setting in the Unity shader docs.
Z-culling/depth testing - prevents polygons that won't be seen, because they are further from the camera than opaque geometry already rendered this frame, from being rendered. Only fully opaque (no transparency) polygons can occlude in this way. Depth testing is also done in the shader, so the geometry IS processed by the GPU. This process can be controlled by the ZWrite and ZTest settings described in the Unity shader docs.
On a related note, since you are using so many geometrically identical blocks, make sure you are using a prefab. This allows Unity to reuse the same set of triangles rather than storing 2 x 6 x thousands of triangles in your scene (2 triangles per face, 6 faces per cube, thousands of cubes), thereby reducing GPU memory load.
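For intuition, here is a minimal sketch of the test that back face culling effectively performs per triangle: a face whose normal points away from the camera gets discarded. The names here (Vec3, isBackFace, cameraPos) are purely illustrative and written in TypeScript, not Unity API.

```ts
type Vec3 = { x: number; y: number; z: number };

const sub = (p: Vec3, q: Vec3): Vec3 => ({ x: p.x - q.x, y: p.y - q.y, z: p.z - q.z });
const cross = (p: Vec3, q: Vec3): Vec3 => ({
  x: p.y * q.z - p.z * q.y,
  y: p.z * q.x - p.x * q.z,
  z: p.x * q.y - p.y * q.x,
});
const dot = (p: Vec3, q: Vec3): number => p.x * q.x + p.y * q.y + p.z * q.z;

// True when the triangle (a, b, c), wound counter-clockwise, faces away from
// a camera at cameraPos and would therefore be dropped by back face culling.
function isBackFace(a: Vec3, b: Vec3, c: Vec3, cameraPos: Vec3): boolean {
  const normal = cross(sub(b, a), sub(c, a)); // face normal from the winding order
  const toTriangle = sub(a, cameraPos);       // vector from the camera to the face
  return dot(normal, toTriangle) >= 0;        // normal points away from the camera
}
```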

A middle ground between rendering the object as invisible and destroying it is to keep the underlying C++ object but detach it from the scene graph.
This gives you all the rendering-speed benefits of destroying it, but when it comes time to put it back you won't need to pay for re-creation; just reattach it at the right place in the graph.
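As a rough illustration of that middle ground in three.js terms (not Unity API; scene and cube are assumed to be your own objects): removing an object from the scene graph stops it from being rendered, but its geometry, material, and GPU buffers stay alive, so reattaching it later is cheap.

```ts
import * as THREE from 'three';

// Detached: the mesh is no longer traversed or rendered, but nothing is freed.
function hideCube(scene: THREE.Scene, cube: THREE.Mesh): void {
  scene.remove(cube);
}

// Reattached: no re-creation of geometry or textures is needed.
function showCube(scene: THREE.Scene, cube: THREE.Mesh): void {
  scene.add(cube);
}
```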

Related

How to perform all slow THREE.Texture computation at load instead of at runtime?

I'm setting up a Scene containing many textured objects with high-quality textures (resolution 1024x1024 and 2048x2048), where each material has multiple maps (base color, normal, occlusion, roughness). To speed up loading and rendering I'm already reusing the THREE.Texture objects where possible.
But just after load, when I rotate the viewpoint, there is a clear slowdown whenever new objects become visible. Once all the textured objects have been rendered at least once, rotating the viewpoint works smoothly as expected.
Note that if two objects share the same textures, only the first one will slow down the rendering when becoming visible.
Is there a way to compute everything at load in order to avoid this slow down when navigating into the scene?
I already tried setting THREE.Object3D.frustumCulling to true and calling THREE.WebGLRenderer.compile() at load, but I don't see any difference and the slowdown remains.
Finally I found that the issue was simply a dumb typo:
the field is called Object3D.frustumCulled, not Object3D.frustumCulling.
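If it helps, here is a minimal warm-up sketch along those lines, assuming a standard scene/camera/renderer setup (warmUpScene is a hypothetical helper name, not a three.js API): disabling frustum culling for one throwaway frame forces every material to compile and every texture to upload at load time.

```ts
import * as THREE from 'three';

// Render every mesh once at load so textures are uploaded to the GPU up
// front, instead of on the first frame each object becomes visible.
function warmUpScene(
  scene: THREE.Scene,
  camera: THREE.Camera,
  renderer: THREE.WebGLRenderer
): void {
  const toggled: THREE.Object3D[] = [];

  scene.traverse((obj) => {
    if (obj.frustumCulled) {
      obj.frustumCulled = false; // note: frustumCulled, not frustumCulling
      toggled.push(obj);
    }
  });

  renderer.compile(scene, camera); // compile all shader programs up front
  renderer.render(scene, camera);  // one throwaway frame uploads the textures

  // Restore normal frustum culling for the real render loop.
  for (const obj of toggled) {
    obj.frustumCulled = true;
  }
}
```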

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things will be rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of 2 ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still synchronize the position of the camera and skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling function so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.
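A minimal three.js sketch of the layered approach (option 2), assuming you keep the stars in their own starScene separate from mainScene: the star camera copies only the main camera's rotation, which has the same effect as moving a skybox with the viewer, and clearing the depth buffer between passes guarantees the stars never appear in front of anything.

```ts
import * as THREE from 'three';

// Draw the stardome as a background layer, then the real scene on top.
// renderer, starScene, starCamera, mainScene and mainCamera are assumed
// to be set up elsewhere.
function renderFrame(
  renderer: THREE.WebGLRenderer,
  starScene: THREE.Scene,
  starCamera: THREE.PerspectiveCamera,
  mainScene: THREE.Scene,
  mainCamera: THREE.PerspectiveCamera
): void {
  renderer.autoClear = false;                        // we clear manually below

  starCamera.quaternion.copy(mainCamera.quaternion); // rotation only, never position

  renderer.clear();                                  // clear color + depth
  renderer.render(starScene, starCamera);            // background pass: the stars
  renderer.clearDepth();                             // stars can never occlude the scene
  renderer.render(mainScene, mainCamera);            // foreground pass
}
```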

Distorted Geometry being rendered in directx11

I have a problem with rendering 3D with an orthographic projection.
I have the depth stencil enabled, but on rendering it produces weird cuts in between geometry.
I have tried two different depth stencil states, one with depth disabled (for 2D) and one with depth enabled (for 3D). The 3D one gives weird results.
So how do I properly render 3D with an orthographic projection?
Here is an image of the problem:
Well, after debugging for a long time I found that the problem lay in culling: I had culling disabled. Setting the cull mode to D3D11_CULL_BACK made things work the way they should have.

Hooking into hidden surface removal/backface culling to swap textures in WebGL?

I want to swap textures on the faces of a rotating cube whenever they face away from the camera; detecting these faces is not entirely equivalent to, but very similar to, hidden surface removal. Is it possible to hook into the built-in backface culling function/depth buffer to determine this, particularly if I want to extend this to more complex polygons?
There is a simple solution using dot products described here but I am wondering if it's possible to hook into existing functions.
I don't think you can hook into WebGL's internal processing, but even if it were possible it wouldn't be the best way for you. Doing this would require the GPU to switch the current texture on a triangle-by-triangle basis, messing up its internal caches, and in general GPUs don't like "if" commands.
You can, however, render your mesh with the first texture and face culling enabled, then flip the cull-face direction and render the mesh again with the second texture. This way you'll get a different texture on front- and back-facing triangles, as sketched below.
Come to think of it, if you have a correctly closed mesh that uses face culling you shouldn't see any difference, because you'll never see a back face, so the only cases where this is useful are transparent meshes or not-so-correctly closed ones, like billboards. If you want to use this approach with transparency, you'll need to pick the rendering order carefully.
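A minimal WebGL sketch of that two-pass idea, assuming gl is your WebGL context and drawMesh issues your usual draw call (both names are placeholders for your own code):

```ts
// Pass 1 draws front faces with one texture, pass 2 flips the cull direction
// and draws back faces with the other texture.
function drawDoubleSided(
  gl: WebGLRenderingContext,
  drawMesh: () => void,
  textureFront: WebGLTexture,
  textureBack: WebGLTexture
): void {
  gl.enable(gl.CULL_FACE);

  // Pass 1: back faces culled, so only front faces get the first texture.
  gl.cullFace(gl.BACK);
  gl.bindTexture(gl.TEXTURE_2D, textureFront);
  drawMesh();

  // Pass 2: front faces culled, so only back faces get the second texture.
  gl.cullFace(gl.FRONT);
  gl.bindTexture(gl.TEXTURE_2D, textureBack);
  drawMesh();
}
```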

Is drawing outside the viewport in OpenGL expensive?

I have several thousand quads to draw, some of which might fall entirely outside the viewport. I could write code which will detect which quads fall wholly outside viewport and ask OpenGL to draw only those which will be at least partially visible. Alternatively, I could simply have OpenGL draw all of the quads, regardless of whether they intersect with the viewport.
I don't have enough experience with OpenGL to know whether one of these is obviously better (or whether OpenGL offers some quick viewport intersection test I can use). Are draws outside the viewport close to being no-ops, or are they expensive enough that I should try to avoid them?
It depends on your circumstances.
Drawing is best done in batches, preferably batches that are static in structure (i.e. each batch is drawn in its entirety). So you shouldn't be culling down at the quad level, but doing some culling of large groups of quads is not unwelcome.
The primary performance that you'll lose is vertex transform (aka: your vertex shader). A vertex shader has to be run on every vertex you provide, regardless of anything else. However, hardware will discard triangles that are trivially outside of the viewport, so you won't soak up any fillrate or other performance.
However, that doesn't mean it's fine even if your vertex T&L is cheap. Rendering large blocks of triangles that aren't visible may very well stall the rasterizer, because all of the triangles are being culled. That is, if you draw a lot of stuff that gets culled by being off screen, the fillrate that you might have used on actually visible triangles may be lost.
So it's not a good idea to just hurl geometry at the GPU willy-nilly.
In any case, if you're doing 2D rendering, coarse culling of discrete groups of quads is really all you need. You could divide your tilemap into screen-sized portions and draw up to 4 of these based on the camera's position, as in the sketch below.
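A rough sketch of that coarse culling, assuming a tilemap split into fixed-size chunks and an axis-aligned camera rectangle (the Rect shape, the chunk size, and the visibleChunks helper are illustrative, not an OpenGL API):

```ts
interface Rect { x: number; y: number; width: number; height: number; }

// Returns the grid coordinates of every chunk overlapping the camera rect;
// each returned chunk corresponds to one static batch / draw call. With
// screen-sized chunks this is at most 4 chunks.
function visibleChunks(
  camera: Rect,
  chunkSize: number,
  chunksX: number,
  chunksY: number
): Array<{ cx: number; cy: number }> {
  const firstX = Math.max(0, Math.floor(camera.x / chunkSize));
  const firstY = Math.max(0, Math.floor(camera.y / chunkSize));
  const lastX = Math.min(chunksX - 1, Math.floor((camera.x + camera.width) / chunkSize));
  const lastY = Math.min(chunksY - 1, Math.floor((camera.y + camera.height) / chunkSize));

  const result: Array<{ cx: number; cy: number }> = [];
  for (let cy = firstY; cy <= lastY; cy++) {
    for (let cx = firstX; cx <= lastX; cx++) {
      result.push({ cx, cy });
    }
  }
  return result;
}
```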
