Three.js render depth from texture

Is it possible to somehow render to the depth buffer from a pre-rendered texture?
I am pre-rendering the scene, like the original Resident Evil games, and I would like to apply both the pre-rendered depth and color textures to the screen.
I previously used the technique of building a simpler proxy scene for depth, but I am wondering whether there is a way to use the precise pre-rendered depth texture instead.

three.js provides a DepthTexture class which can be used to save the depth of a rendered scene into a texture. Typical use cases for such a texture are post-processing effects like depth of field or SSAO.
If you bind a depth texture to a shader, you can sample it like any other texture. However, the sampled depth value is often converted to a different representation for further processing. For instance, you could compute the viewZ value (the z-distance between the rendered fragment and the camera) or convert between perspective and orthographic depth and vice versa. three.js provides helper functions for such tasks.
The official depth texture example uses these helper functions in order to visualize the scene's depth texture. The important function is:
float readDepth( sampler2D depthSampler, vec2 coord ) {
    float fragCoordZ = texture2D( depthSampler, coord ).x;
    float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
    return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}
In the example, the resulting depth value is used to compute the final color of the fragment.
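Wired up with the readDepth helper above, the example's fragment shader does something along these lines (the uniform and varying names follow the example; this is a paraphrased sketch, not a verbatim copy):

uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;
varying vec2 vUv;

void main() {
    float depth = readDepth( tDepth, vUv );
    // near fragments come out bright, far fragments dark
    gl_FragColor.rgb = 1.0 - vec3( depth );
    gl_FragColor.a = 1.0;
}

If the goal is to feed a pre-rendered depth texture back into the depth buffer itself, as the original question asks, the sampled value can in principle also be written to gl_FragDepth in a full-screen pass (WebGL 2, or WebGL 1 with the EXT_frag_depth extension), though that goes beyond what the example shows.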

Related

Set depth texture for Z-testing in OpenGL ES 2.0 or 3.0

Having a 16-bit uint texture in my C++ code, I would like to use it for z-testing in an OpenGL ES 3.0 app. How can I achieve this?
To give some context, I am making an AR app where virtual objects can be occluded by real objects. The depth texture of the real environment is generated, but I can't figure out how to apply it.
In my app, I first use glTexImage2D to render the backdrop image from the camera feed, then I draw some virtual objects. I would like the objects to be transparent based on the depth texture. Ideally, the occlusion testing should not be binary but gradual, so that I can alpha-blend the objects with the background near the occlusion edges.
I can pass and read the depth texture in the fragment shader, but I am not sure how to use it for z-testing instead of just rendering it.
Let's assume you have a depth texture uniform sampler2D u_depthmap and the internal format of the depth texture is a floating point format.
To read the texel the current fragment falls on, you have to know the size of the viewport (uniform vec2 u_viewport_size). gl_FragCoord contains the window-relative coordinates (x, y, z, 1/w) of the fragment, so the texture coordinate for the depth map is calculated by:
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
The depth from the depth texture u_depthmap is given in the range [0.0, 1.0], because of the internal floating point format. The depth of the fragment is contained in gl_FragCoord.z, in the range [0.0, 1.0], too.
That means that the depth of the map and the depth of the fragment can be calculated as follows:
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;

void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;
    .....
}
Note, map_depth and frag_depth are both in the range [0.0, 1.0]. If they were both generated with the same projection (especially the same near and far planes), then they are comparable. This means you have to ensure that the shader generates the same depth values as the ones in the depth map, for the same point in the world. If this is not the case, then you have to linearize the depth values and calculate the view-space Z coordinate.
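Building on that, a minimal sketch of the gradual (non-binary) occlusion test the question asks for could look like the following; u_object_color and the width of the blending band are illustrative assumptions, and both depths are assumed to come from the same projection:

#version 300 es
precision mediump float;

uniform sampler2D u_depthmap;   // depth of the real environment
uniform vec2 u_viewport_size;
uniform vec4 u_object_color;    // assumed: the virtual object's shaded color

out vec4 frag_color;

void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;

    // Fade the object out over a small depth band around the environment
    // depth instead of cutting it off hard at the occlusion edge.
    float band = 0.002; // assumed tuning value, in depth-buffer units
    float visibility = 1.0 - smoothstep(map_depth - band, map_depth + band, frag_depth);

    frag_color = vec4(u_object_color.rgb, u_object_color.a * visibility);
}

On the application side this of course requires alpha blending to be enabled for the virtual objects.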

openGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color of the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
Using this way I'll be able to change shape of image from polygon shape to whatever I want.
But I cannot find anything about this method (get pixel first, then do the Math and get new position for pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it renders forward, from geometry to pixels, rather than backwards, from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
        (planarDistanceToPerimeter/planarLengthOfNormal)*normal;

// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
        (linearDistanceToEdge/linearLengthOfNormal)*normal;

// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));

// ... do something to map the cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
        coordinateFromCylindricalPosition(cylindricalIntersection);
A common implementation of coordinateFromCylindricalPosition might simply be return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
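Spelled out as a function, that suggestion looks roughly like this (a sketch; whether you shift the result into the [0, 1] range depends on your texture wrap mode):

vec2 coordinateFromCylindricalPosition(vec3 p)
{
    // angle around the cylinder axis, mapped from [-pi, pi] to [-0.5, 0.5]
    float u = atan(p.y, p.x) / 6.28318530717959;
    // height along the axis of the unit cylinder, mapped from [-1, 1] to [-0.5, 0.5]
    float v = p.z * 0.5;
    // add 0.5 to both components if coordinates in [0, 1] are preferred
    return vec2(u, v);
}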

GLSL texture distortion

I want to create a simple heat distortion on my texture, but can't seem to figure out the steps required to accomplish this. So far, I've been able to change pixel colors the following way (using pixel shader):
uniform sampler2D u_Texture;

varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal;
varying vec2 v_TexCoordinate;

void main()
{
    vec4 col = texture2D(u_Texture, v_TexCoordinate);
    col.r = 0.5;
    gl_FragColor = col;
}
This is where I get lost. How can I modify pixel locations to distort the texture? Can I set any other properties besides gl_FragColor, or do I have to create a plane with many vertices and distort the vertex locations? Is it possible to get 'neighbour' pixel color values? Thanks!
How can I modify pixel locations to distort the texture?
By modifying the location from which you sample the texture. That would be the second parameter of texture2D:
vec4 col = texture2D(u_Texture, v_TexCoordinate);
                                ^-------------^
                                Texture distortion goes here
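For example, a simple heat-haze style distortion could look like the following sketch (the wave amplitude, frequency, and the u_Time uniform are illustrative assumptions, not part of the original answer):

uniform sampler2D u_Texture;
uniform float u_Time;          // assumed: time in seconds, passed in by the application

varying vec2 v_TexCoordinate;

void main()
{
    // offset the sampling location with a small animated wave
    vec2 distorted = v_TexCoordinate;
    distorted.x += sin(v_TexCoordinate.y * 40.0 + u_Time * 3.0) * 0.005;

    gl_FragColor = texture2D(u_Texture, distorted);
}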
Is it possible to get 'neighbour' pixel color values?
Yes, and that's the proper way to do it. In a fragment shader the location you're writing to is immutable¹, so it all has to be done through the fetch location. Also note that you can sample from the same texture an arbitrary² number of times, which enables you to implement blurring³ effects.
¹: writes to freely determined image locations (scatter writes) are supported by OpenGL-4 class hardware, but scatter writes are extremely inefficient and should be avoided.
²: there's a practical limit on the total runtime of the shader, which may be imposed by the OS, and also by the desired frame rate.
³: blurring effects should be implemented using so-called separable filters for largely improved performance.
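As an illustration of footnote ³, one pass of a separable blur samples along a single axis only; a second pass with the offset along the other axis completes the 2D blur. A minimal sketch, where u_TexelSize (assumed to be 1.0 / texture resolution) is not part of the original answer:

uniform sampler2D u_Texture;
uniform vec2 u_TexelSize;   // assumed: vec2(1.0 / width, 1.0 / height)

varying vec2 v_TexCoordinate;

void main()
{
    // horizontal pass; use vec2(0.0, u_TexelSize.y) for the vertical pass
    vec2 dx = vec2(u_TexelSize.x, 0.0);
    vec4 sum = vec4(0.0);
    // 5-tap binomial weights (1, 4, 6, 4, 1) / 16
    sum += texture2D(u_Texture, v_TexCoordinate - 2.0 * dx) * 0.0625;
    sum += texture2D(u_Texture, v_TexCoordinate -       dx) * 0.25;
    sum += texture2D(u_Texture, v_TexCoordinate           ) * 0.375;
    sum += texture2D(u_Texture, v_TexCoordinate +       dx) * 0.25;
    sum += texture2D(u_Texture, v_TexCoordinate + 2.0 * dx) * 0.0625;
    gl_FragColor = sum;
}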

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using the cross product, I find the x and z axes and append the tri-stream with 4 new verts, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
The quad generates and displays correctly on screen. However:
The problem I am facing is that the mapping of the uv coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly, unless the quad is facing towards the camera. The closer the quad normal matches the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the uv's not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.

Direct3D9 Calculating view space point light position

I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I have only depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't represent the view-space position at all. Its x,y coordinates are off the charts, they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y axis is inverted. Only if the camera is at (0,0,0) does the light stand at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView;       // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix

D3DXMatrixLookAtLH(&matView,
                   &D3DXVECTOR3 (x, y, z),    // the camera position
                   &D3DXVECTOR3 (xt, yt, zt), // the look-at position
                   &D3DXVECTOR3 (0.0f, 0.0f, 1.0f)); // the up direction

D3DXMatrixPerspectiveFovLH(&matProjection,
                           fov,   // the horizontal field of view
                           asp,   // aspect ratio
                           znear, // the near view-plane
                           zfar); // the far view-plane

D3DXMATRIX vysl = matView * matProjection;
eff->SetMatrix("worldViewProj", &vysl); // vertices are transformed ok in the shader

//render g-buffer

D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0, 0, 0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight", &lpos); // but there is already a mess in lpos at this time

//render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried to transform the light only by matView, without the projection, but the problem is still the same. If I transform the light in a shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How differently are the vertices treated?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view position. Here is a code snippet (from Andrew Lauritzen's deferred light example):
tP is the projection transform, positionScreen is the screen-space coordinate in the [-1, 1] range, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
                        float viewSpaceZ)
{
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z = viewSpaceZ;
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector that has not been divided by w). But in your shader you use its xy components without taking that w into account. This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for the vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolators, where the hardware eventually divides their xyz by the interpolated w.

Resources