How to apply depth map gradually instead of binary? - opengl-es

I'm working on a mobile app that renders virtual objects on a real environment (AR), kind of like Pokemon Go. The background of every frame is the camera image, passed to the fragment shader. Along with the camera image, a procedurally computed depth image is passed to the fragment shader as a texture.
I would like to use the depth texture as a depth map for the virtual objects. Currently, this works, but doesn't look good for the following reason. The comparison between the depth of the object and the depth from the depth texture is binary, resulting in visible edges of occlusion on the virtual object. I would like the occlusion to gradually fade in, instead of having a hard edge. How can I achieve that?
The current comparison in the fragment shader looks like this:
float visibility = clamp(0.5 * (depth_mm - asset_depth_mm) + 0.5, 0.0, 1.0);
gl_FragColor.a = visibility * object_color.a;
How to change this so that the regions where the object becomes occluded look gradually fading, instead of an instant change in alpha?

I would change:
float visibility = clamp(0.5 * (depth_mm - asset_depth_mm) + 0.5, 0.0, 1.0);
to
float visibility = clamp(0.001 * (depth_mm - asset_depth_mm) + 0.5, 0.0, 1.0);
(tweaking the value of that 0.001 multiplier to whatever looks good)
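For reference, a minimal sketch of that change in context (the fade-range constant is hypothetical; dividing by it is equivalent to the multiplier above, with 0.001 corresponding to a fade range of 1000 mm):

// Sketch only, assuming depth_mm, asset_depth_mm and object_color as in the question.
// kFadeRangeMm is a hypothetical tuning constant: the distance in millimetres over
// which the object fades from fully visible to fully occluded.
const float kFadeRangeMm = 1000.0;
float visibility = clamp((depth_mm - asset_depth_mm) / kFadeRangeMm + 0.5, 0.0, 1.0);
gl_FragColor.a = visibility * object_color.a;

If the linear ramp still looks too abrupt, a smoothstep over the same range gives a softer S-shaped transition.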

Related

Three.js render depth from texture

Is it possible to somehow render to depth buffer from pre-rendered texture?
I am pre-rendering a scene, like the original Resident Evil games, and I would like to apply both the pre-rendered depth and color textures to the screen.
I previously used the technique of building a simpler proxy scene for depth, but I am wondering if there is a way to use the precise pre-rendered depth texture instead.
three.js provides a DepthTexture class which can be used to save the depth of a rendered scene into a texture. Typical use cases for such a texture are post processing effects like Depth-of-Field or SSAO.
If you bind a depth texture to a shader, you can sample it like any other texture. However, the sampled depth value is sometimes converted to different representations for further processing. For instance you could compute the viewZ value (which is the z-distance between the rendered fragment and the camera) or convert between perspective and orthographic depth and vice versa. three.js provides helper functions for such tasks.
The official depth texture example uses these helper functions in order to visualize the scene's depth texture. The important function is:
float readDepth( sampler2D depthSampler, vec2 coord ) {
    float fragCoordZ = texture2D( depthSampler, coord ).x;
    float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
    return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}
In the example, the resulting depth value is used to compute the final color of the fragment.
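Roughly, the example's fragment shader does something along these lines (a sketch, not verbatim; tDepth and vUv stand for the depth sampler uniform and the interpolated uv varying):

uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;
varying vec2 vUv;
void main() {
    // Sample the scene depth and map it to a grey value: near = white, far = black.
    float depth = readDepth( tDepth, vUv );
    gl_FragColor = vec4( vec3( 1.0 - depth ), 1.0 );
}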

Read vertex positions as pixels in Three.js

In a scenario where vertices are displaced in the vertex shader, how to retrieve their transformed positions in WebGL / Three.js?
Other questions here suggest writing the positions to a texture and then reading the pixels back, but the resulting values don't seem to be correct.
In the example below the position is passed to the fragment shader without any transformations:
// vertex shader
varying vec4 vOut;
void main() {
    gl_Position = vec4(position, 1.0);
    vOut = vec4(position, 1.0);
}

// fragment shader
varying vec4 vOut;
void main() {
    gl_FragColor = vOut;
}
Then reading the output texture, I would expect pixel[0].r to be identical to positions[0].x, but that is not the case.
Here is a jsfiddle showing the problem:
https://jsfiddle.net/brunoimbrizi/m0z8v25d/2/
What am I missing?
Solved. Quite a few things were wrong with the jsfiddle mentioned in the question.
width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
The positions in the vertex shader need a nudge of 1.0 / width.
The vertex shader needs to know about width and height, they can be passed in as uniforms.
Each vertex needs an attribute with its index so it can be correctly mapped.
Each position should be one pixel in the resulting texture.
The resulting texture should be drawn as gl.POINTS with gl_PointSize = 1.0.
Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/
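Putting those steps together, a minimal vertex-shader sketch could look like the following (aIndex, uWidth and uHeight are illustrative names, not taken from the fiddle: a per-vertex index attribute and the render-target size in pixels). The fragment shader stays as in the question, writing the varying out as the pixel colour.

attribute float aIndex;    // per-vertex index, 0 .. vertexCount - 1
uniform float uWidth;      // render-target width in pixels
uniform float uHeight;     // render-target height in pixels
varying vec4 vOut;
void main() {
    // Map the linear vertex index to a pixel coordinate ...
    float x = mod(aIndex, uWidth);
    float y = floor(aIndex / uWidth);
    // ... then to clip space, nudged by half a pixel (1.0 / width in NDC)
    // so each gl.POINTS point rasterizes exactly at its pixel centre.
    vec2 pixel = (vec2(x, y) + 0.5) / vec2(uWidth, uHeight);
    gl_Position = vec4(pixel * 2.0 - 1.0, 0.0, 1.0);
    gl_PointSize = 1.0;
    vOut = vec4(position, 1.0);  // the (displaced) position to read back
}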
You're not writing the vertices out correctly.
https://jsfiddle.net/ogawzpxL/
First off, you're clipping the geometry, so your vertices actually end up outside the view, and you see the middle of the quad without any vertices.
You can use the uv attribute to render the entire quad in the view.
gl_Position = vec4( uv * 2. - 1. , 0. ,1.);
Everything in the buffer represents some point on the quad. What seems to be tricky is that when you render, the pixel will sample right next to your vertex. In the fiddle I've applied an offset in world space by the amount it would be in pixel space, and it didn't really work.
The reason why it seems to work with points is that this is all probably wrong :) If you want to transform only the vertices, then you need to store them properly in the texture. You can use points for this, but ideally they wouldn't be spaced out so much. In your scenario, they would fill the first couple of rows of the texture (since it's much larger than it could be).
You might start running into problems as soon as you try to apply this to something other than PlaneGeometry. In which case this problem has to be broken down.

Set depth texture for Z-testing in OpenGL ES 2.0 or 3.0

Having a 16-bit uint texture in my C++ code, I would like to use it for z-testing in an OpenGL ES 3.0 app. How can I achieve this?
To give some context, I am making an AR app where virtual objects can be occluded by real objects. The depth texture of real environment is generated, but I can't figure out how to apply it.
In my app, I first use glTexImage2D to render the backdrop image from the camera feed, then I draw some virtual objects. I would like the objects to be transparent based on the depth texture. Ideally, the occlusion testing needs to be not binary, but gradual, so that I can alpha blend the objects with the background near the occlusion edges.
I can pass and read the depth texture in the fragment shader, but not sure how to use it for z-testing instead of rendering.
Let's assume you have a depth texture uniform sampler2D u_depthmap and that the internal format of the depth texture is a floating point format.
To read the texel from the texture that the current fragment lies on, you have to know the size of the viewport (uniform vec2 u_viewport_size). gl_FragCoord contains the window-relative coordinates (x, y, z, 1/w) of the fragment. So the texture coordinate for the depth map is calculated by:
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
The depth from the depth texture u_depthmap is given in the range [0.0, 1.0], because of the internal floating point format. The depth of the fragment is contained in gl_FragCoord.z, in the range [0.0, 1.0], too.
That means that the depth of the map and the depth of the fragment can be calculated as follows:
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;
void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;
    .....
}
Note that map_depth and frag_depth are both in the range [0.0, 1.0]. If they were both generated with the same projection (especially the same near and far planes), then they are comparable. This means you have to ensure that the shader generates the same depth values as the ones in the depth map for the same point in the world. If this is not the case, then you have to linearize the depth values and calculate the view-space Z coordinate.
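If they are not comparable, a sketch of the usual linearization for a standard perspective projection could look like this (u_near and u_far are hypothetical uniforms holding the near and far plane distances):

uniform float u_near;
uniform float u_far;
// Convert a window-space depth in [0.0, 1.0] back to a positive view-space distance.
float linearize(float depth)
{
    float ndc_z = depth * 2.0 - 1.0;  // [0.0, 1.0] -> [-1.0, 1.0]
    return 2.0 * u_near * u_far / (u_far + u_near - ndc_z * (u_far - u_near));
}

Comparing linearize(map_depth) with linearize(frag_depth) then compares distances in the same linear units, provided both depths really come from the same kind of perspective projection.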

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross product, I find the x and z axis and append the tri-stream with 4 new verts in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
The quad generates and displays correctly on screen. However;
The problem I am facing is that the mapping of the uv coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly, unless the quad is facing towards the camera. The closer the quad normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the uvs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.

WebGL / GPU Skinning / Skeletal Animation

I am frustratingly close to working skeletal animation in WebGL.
Background
I have a model with a zombie walk animation that I got for free. I downloaded the entire thing in a single Collada file. I wrote a parser to get all the vertices, normals, joint influence indices/weights, and joint matrices. I am able to render the character in its bind pose by doing
joints[i].skinning_matrix = MatrixMultiply(joints[i].inverse_bind_pose_matrix, joints[i].world_matrix);
where a joint's world matrix is the joint's bind_pose_matrix multiplied by its parent's world matrix. I got the inverse_bind_pose_matrix by doing:
joints[i].inverse_bind_pose_matrix = MatrixInvert(joints[i].world_matrix);
So really, rendering the character in its bind pose is just passing Identity matrices to the shader, so maybe I'm not even doing that part right at all. However, the inverse bind pose matrices that I calculate are nearly identical to the ones supplied by the Collada file, so I'm pretty sure those are good.
Here's my model in its bind pose:
Problem
Once I go ahead and try to calculate the skinning matrix using a single frame of the animation (I chose frame 10 at random), it still resembles a man but something is definitely wrong.
I'm using the same inverse_bind_pose_matrix that I calculated in the first place. I'm using a new world matrix, calculated instead by multiplying each joint's keyframe/animation matrix by its parent's new world matrix.
I'm not doing any transposing anywhere in my entire codebase, though I think I've tried transposing pretty much any combination of matrices to no avail.
Here is my model in his animation frame 10 pose:
Some Relevant Code
vertex shader:
attribute vec4 aPosition;
attribute float aBoneIndex1;
// up to aBoneIndex5
attribute float aBoneWeight1;
// up to aBoneWeight5
uniform mat4 uBoneMatrices[52];
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
void main(void) {
    vec4 vertex = vec4(0.0, 0.0, 0.0, 0.0);
    vertex += aBoneWeight1 * vec4(uBoneMatrices[int(aBoneIndex1)] * aPosition);
    vertex += aBoneWeight2 * vec4(uBoneMatrices[int(aBoneIndex2)] * aPosition);
    vertex += aBoneWeight3 * vec4(uBoneMatrices[int(aBoneIndex3)] * aPosition);
    vertex += aBoneWeight4 * vec4(uBoneMatrices[int(aBoneIndex4)] * aPosition);
    vertex += aBoneWeight5 * vec4(uBoneMatrices[int(aBoneIndex5)] * aPosition);
    // normal/lighting part
    // the "/ 90.0" simply scales the model down; the problem persists without it.
    gl_Position = uPMatrix * uMVMatrix * vec4(vertex.xyz / 90.0, 1.0);
}
You can see my parser in its entirety (if you really really want to...) on GitHub and you can see the model live here.
