Distorted geometry being rendered in DirectX 11 (WinAPI)

I have a problem with rendering 3D with an orthographic projection.
I have the depth stencil enabled, but rendering produces weird cuts
between pieces of geometry.
I have tried two different depth-stencil states: one with depth disabled (for 2D)
and one with depth enabled (for 3D). The 3D one gives weird results.
So how do I properly render 3D with an orthographic projection?
Here is an image of the problem:

Well, after debugging for a long time I found that the problem lay in culling: I had culling disabled. Setting the cull mode to D3D11_CULL_BACK made things work the way they should have.
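For reference, a minimal sketch of the rasterizer-state setup that enables back-face culling; it assumes `device` and `context` are your already-created `ID3D11Device` and immediate context:

```cpp
// Sketch: enable back-face culling (assumes valid `device`/`context`).
D3D11_RASTERIZER_DESC rd = {};
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_BACK;    // cull triangles facing away from the camera
rd.FrontCounterClockwise = FALSE; // D3D default winding: clockwise = front face
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* rasterState = nullptr;
device->CreateRasterizerState(&rd, &rasterState);
context->RSSetState(rasterState);
```

Note that with culling enabled, your geometry's winding order must be consistent, or the "wrong" faces will disappear instead.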

Related

Barycentric wireframes with full disclosure of back faces

I've implemented a barycentric-coordinates wireframe shader along these lines, and in general it is working nicely.
But, as in Florian Boesch's WebGL demo, some of the wire faces on the far side of the mesh are obscured (perhaps related to the order in which the GPU draws the faces).
I've set the following in the hopes that they would clear things up:
glDisable(GL_CULL_FACE);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
...but no go so far. Is this possible in OpenGL ES 2.0?
I had forgotten to discard transparent fragments, so the depth buffer was being written in spite of the apparently transparent geometry; the mesh was therefore self-obscuring because later fragments failed the depth test.
This would be the problem in Florian's demo too, though he may explicitly avoid discard for mobile performance reasons.
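In GLSL ES 2.0 terms, the fix described above is a one-line `discard` in the fragment shader. A sketch, where the uniform/varying names and the alpha threshold are assumptions:

```glsl
// Fragment shader sketch: drop effectively-transparent fragments so they
// never write to the depth buffer.
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vUV;

void main() {
    vec4 color = texture2D(uTexture, vUV);
    if (color.a < 0.01) // assumed threshold for "effectively transparent"
        discard;        // fragment dropped: no color and no depth write
    gl_FragColor = color;
}
```

Since `discard` can hurt performance on tile-based mobile GPUs, it is worth gating it behind a separate shader variant used only for transparent materials.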

Alpha Blending and face sorting using OpenGL and GLSL

I'm writing a little 3D engine. I've just added alpha blending to my program, and I wonder one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: a scene composed of one skybox and one tree with alpha-blended leaves!
Here's a screenshot of such a scene:
So far everything seems correct concerning the alpha blending of the leaves relative to each other.
But if we get closer...
... we can see a small artifact at the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think of my supposition?
PS: To be clear, all of the leaf geometry is rendered in just one draw call.
But if I'm right, it would mean that to render an alpha-blended mesh like this tree, I need to update my VBO every time the camera moves, sorting all the primitives (triangles or quads) by distance from the camera's point of view, so that the primitives at the back are rendered first...
What do you think of my idea?
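That supposition matches the standard approach for alpha-blended geometry: sort the primitives far-to-near each frame (or whenever the camera has moved enough) and re-upload the index order. A minimal sketch, assuming you have one centroid per triangle; squared distance is used so no square root is needed for ordering:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Returns triangle indices sorted back-to-front relative to the camera.
// `centroids` holds one precomputed centroid per triangle.
std::vector<size_t> sortBackToFront(const std::vector<Vec3>& centroids,
                                    const Vec3& camera) {
    std::vector<size_t> order(centroids.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
        // Farther triangle first; squared distance preserves the ordering.
        return distSq(centroids[a], camera) > distSq(centroids[b], camera);
    });
    return order;
}
```

The resulting order can be used to rewrite the index buffer before the draw call. This per-frame sort is the real cost of correct blending, which is why many engines instead use alpha testing (discard) for foliage.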

Does the GPU process invisible things?

I'm making a Minecraft-like game in Unity 5. For the world rendering, I don't know whether I should destroy the cubes I can't see or make them invisible.
My idea was to destroy them, but re-creating them each time they become visible would take too much processing power, so I'm searching for alternatives. Is making them invisible a viable solution?
I'll be loading a ton of cubes at the same time; for those unfamiliar with Minecraft, here is a screenshot so that you get the idea.
That is just part of what is rendered at the same time in a typical session.
Unity, like all graphics engines, can cause the GPU to process geometry that would not be visible on screen. The processes that try to limit this are culling and depth testing:
Frustum culling - prevents objects fully outside of the camera's viewing area (frustum) from being rendered. The viewing frustum is defined by the near and far clipping planes and the four planes connecting near and far on each side. This is always on in Unity and is defined by your camera's settings. Excluded objects will not be sent to the GPU.
Occlusion culling - prevents objects that are within the camera's view frustum but completely occluded by other objects from being rendered. This is not on by default. For information on how to enable and configure it, see occlusion culling in the Unity manual. Occluded objects will not be sent to the GPU.
Back face culling - prevents polygons with normals facing away from the camera from being rendered. This occurs at the shader level so the geometry IS processed by the GPU. Most shaders do cull back faces. See the Cull setting in the Unity shader docs.
Z-culling/depth testing - prevents polygons that won't be seen, due to being further away from the camera than opaque geometry that has already been rendered this frame, from being rendered. Only fully opaque (no transparency) polygons can cause this. Depth testing is also done in the shader and therefore geometry IS processed by the GPU. This process can be controlled by the ZWrite and ZTest settings described in the Unity shader docs.
On a related note, if you are using so many geometrically identical blocks make sure you are using a prefab. This allows Unity to reuse the same set of triangles rather than having 2 x 6 x thousands in your scene, thereby reducing GPU memory load.
A middle ground in between rendering the object as invisible or destroying it is to keep the C++ object but detach it from the scene graph.
This will give you all the rendering speed benefits of destroying it, but when it comes time to put it back you won't need to pay for recreation, just reattach it at the right place in the graph.
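The detach/reattach idea can be sketched with a hypothetical minimal scene graph (the `Node` type and its methods are illustrative, not from any particular engine); only attached nodes are visited by the renderer, but a detached node keeps its GPU resources alive:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal scene graph: only attached nodes get rendered.
struct Node {
    std::vector<Node*> children;

    void attach(Node* child) { children.push_back(child); }

    // O(n) removal; fine for modest child counts.
    void detach(Node* child) {
        children.erase(std::remove(children.begin(), children.end(), child),
                       children.end());
    }

    // Number of nodes the renderer would visit this frame.
    size_t renderableCount() const {
        size_t n = 1;
        for (const Node* c : children) n += c->renderableCount();
        return n;
    }
};
```

Detaching skips the node (and its whole subtree) during traversal without destroying its vertex buffers, so putting it back later is just a pointer insertion rather than a full rebuild.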

Drawing transparent sprites in a 3D world in OpenGL(LWJGL)

I need to draw transparent 2D sprites in a 3D world. I tried rendering a quad, texturing it (using slick_util), and rotating it to face the camera, but when there are many of them the transparency doesn't really work: the sprite closest to the camera blocks the ones behind it if it's rendered before them.
I think it's because OpenGL only keeps the fragment that is closest to the viewer without checking the alpha value.
This could be fixed by sorting the sprites from furthest away to closest, but I don't know how to do that,
and wouldn't I have to use Math.sqrt to get the distance? (I've heard it's slow.)
I wonder if there's an easy way to get transparency in 3D to work correctly (e.g. by enabling something in OpenGL).
Disable depth testing and render transparent geometry back to front.
Or switch to additive blending and hope that looks OK.
Or use depth peeling.
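A sketch of the first option as a render-state sequence (the `Sprite`/`drawSprite` names are hypothetical; the GL calls are standard). A common variant keeps the depth *test* enabled but disables depth *writes*, so sorted transparent sprites still sit correctly behind opaque geometry. And on the sqrt question: comparing squared distances (dx*dx + dy*dy + dz*dz) gives the same ordering as comparing true distances, so Math.sqrt is never needed just for sorting.

```cpp
// Sketch: draw all opaque geometry first, then the sorted transparent pass.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);              // depth test stays on, depth writes stop
// `sprites` sorted farthest-first by squared distance to the camera
for (Sprite& s : sprites) drawSprite(s);
glDepthMask(GL_TRUE);               // restore depth writes for the next frame
```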

Drawing Outline with OpenGL ES

Every technique that I've found or tried for rendering an outline in OpenGL uses some function that is not available in OpenGL ES...
What I could do is set depthMask to false, draw the object as a 3-pixel-wide wireframe, re-enable depthMask, and then draw the object normally. That doesn't work for me because it outlines only the external silhouette of my object, not the internal details.
The following image shows two outlines, the left one is a correct outline, the right one is what I got.
So, can someone direct me to a technique that IS available on OpenGL ES?
Haven't done one of these for a while, but I think you're almost there! What I would recommend is this:
Keep depthMask enabled, but flip your backface culling to only render the "inside" of the object.
Draw the mesh with that shader that pushes all the verts out along their normals slightly and as a solid color (your outline color, probably black). Make sure that you're drawing solid triangles and not just GL_LINES.
Flip the backface culling back to normal again and re-render the mesh like usual.
The result is that the outlines will only be visible around the points on your mesh where the triangles start to turn away from the camera. This gives you some nice, simple outlines around things like noses, chins, lips, and other internal details.
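The "push the verts out along their normals" pass from step 2 is a tiny vertex shader. A sketch in GLSL ES 2.0, where the attribute/uniform names are assumptions; the matching fragment shader just outputs your solid outline color:

```glsl
// Inverted-hull outline pass: inflate the mesh slightly along its normals.
attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uModelViewProjection;
uniform float uOutlineWidth; // model-space inflation, e.g. 0.02

void main() {
    vec3 inflated = aPosition + aNormal * uOutlineWidth;
    gl_Position = uModelViewProjection * vec4(inflated, 1.0);
}
```

This works best on smooth meshes with shared vertex normals; hard edges with split normals can open gaps in the inflated hull.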
