Three.js Render Depth to Texture

I have a large number of meshes in three.js. To render them efficiently, I merge them by material. However, I still want to be able to select them with the mouse.
My approach is the following: in one rendering pass I bake the merged meshes into a texture, and in a second pass I render only the highlighted meshes as a transparent overlay. So far it almost works, except that the visibility is wrong. The problem is that WebGLRenderTarget only gives me the color buffer as a texture. I would actually need a second texture holding the depth buffer, ideally without a third rendering pass. I did not find anything related in the Three.js documentation. Any ideas?

I think you need to approach this differently. You cannot use a depth texture to write into the depth buffer; the only way to write into the depth buffer is to render primitives.
How about this:
Bake your scene into a texture but render depth into the on-screen depth buffer.
In the second pass, keep (do not clear) that depth buffer
Render your baked texture with depth tests and depth writes disabled: gl.disable(gl.DEPTH_TEST); gl.depthMask(false);
Render your selected object(s) with the highlight material, depth-testing against the preserved depth buffer (see the sketch below)
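Put together, the overlay part of a frame might look roughly like the following in raw WebGL. This is only a sketch: it assumes the on-screen depth buffer was already filled with the scene's depth while baking, and drawFullScreenQuad(), drawHighlighted() and bakedTexture are hypothetical names standing in for your own code.

    // Draw the baked color as a full-screen quad without touching depth,
    // so the scene depth written during baking stays intact.
    gl.disable(gl.DEPTH_TEST);
    gl.depthMask(false);
    drawFullScreenQuad(bakedTexture);   // hypothetical helper

    // The highlighted object(s) are depth-tested against that preserved depth,
    // so they only show up where they would actually be visible in the scene.
    // Depth writes stay off: the highlight is just a transparent overlay.
    gl.enable(gl.DEPTH_TEST);
    drawHighlighted();                  // hypothetical helper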

Related

Alpha Blending and face sorting using OpenGL and GLSL

I'm writing a little 3D engine. I've just added alpha blending to my program, and I wonder one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: I have a scene composed of 1 skybox and 1 tree with alpha-blended leaves!
Here's a screenshot of such a scene:
Up to this point, the alpha blending of the leaves relative to each other seems correct.
But if we get closer...
... we can see there is a little problem at the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think about my supposition?
PS: I want to point out that all of the leaf geometry is rendered in just one draw call.
But if I'm right, it would mean that whenever I render an alpha-blended mesh like this tree, I need to update my VBO every time the camera moves, sorting all the primitives (triangles or quads) by distance from the camera so that the ones in back are rendered first...
What do you think of my idea?
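For reference, the per-frame sort the poster describes would look roughly like the sketch below. It assumes the leaf primitives are kept in a JavaScript array of objects with a precomputed center before being re-uploaded to the VBO; the names are illustrative only.

    // Sort primitives back-to-front by distance from the camera, then
    // rebuild and re-upload the vertex buffer in this order.
    function sortLeavesBackToFront(primitives, cameraPos) {
      primitives.sort(function (a, b) {
        const da = distanceSquared(a.center, cameraPos);
        const db = distanceSquared(b.center, cameraPos);
        return db - da;                 // farthest first
      });
      return primitives;
    }

    function distanceSquared(p, q) {
      const dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
      return dx * dx + dy * dy + dz * dz;
    }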

OpenGL ES. Hide layers in 2D?

For example, I have 2 layers: a background and an image. In my case I must show or hide the image when the zoom value changes (a simple float variable).
The only solution I know of is to keep 2 separate framebuffers, one for the background and one for the image, and to skip drawing the image when it is not needed.
But is it possible to do this in an easier way?
Just don't pass the geometry to glDrawArrays() for the layer you want to hide when the zoom occurs. OpenGL ES completely re-renders everything every frame. You should have a glClear() call at the start of your frame render loop. So, removing something is done by just not sending its triangles. You might need to divide your geometry into separate lists for each layer.
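In WebGL-flavoured JavaScript, that idea looks roughly like the sketch below; drawBackground(), drawImageLayer() and ZOOM_HIDE_THRESHOLD are hypothetical names, not from the original answer.

    function renderFrame(zoom) {
      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
      drawBackground();                     // always drawn
      if (zoom < ZOOM_HIDE_THRESHOLD) {     // hide the layer past some zoom level
        drawImageLayer();                   // "hiding" = simply not issuing its draw call
      }
    }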

How to access the depth value of the neighbor pixels in WebGL?

I want to make a toon border effect. For that, I'll use the depth values of the neighboring pixels of each pixel to determine whether or not it should be blacked out. How can I access that information inside the fragment shader?
When you render your scene in the normal way (vertex shader, then fragment shader, in a single pass), there is no way in the fragment shader to access the depth values of other pixels.
But:
You can render the scene twice and do the effect as post-processing. In the first pass you store depth values and other data (normals, etc.) in a render target (a texture); in the second pass you read those textures in your shader.
Here is the effect implemented in XNA; it can be quickly ported to GLSL: http://xnameetingpoint.weebly.com/shader7f31.html
Here is a link about render-to-texture: http://learningwebgl.com/blog/?p=1786
Hint: depth values alone will not be enough for border detection; you will have to use normals as well. This is covered in the XNA tutorial above.
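As an illustration of the second pass, a depth-only edge test could look like the sketch below (a WebGL fragment shader written as a JavaScript string; uTexDepthBuffer, uTexelSize, the vUv varying and the 0.001 threshold are assumed names and values, not taken from the linked tutorial).

    const edgeFragmentShader = `
      precision mediump float;
      uniform sampler2D uTexDepthBuffer;  // depth stored in the first pass
      uniform vec2 uTexelSize;            // 1.0 / render target resolution
      varying vec2 vUv;
      void main() {
        float d  = texture2D(uTexDepthBuffer, vUv).x;
        float dx = texture2D(uTexDepthBuffer, vUv + vec2(uTexelSize.x, 0.0)).x;
        float dy = texture2D(uTexDepthBuffer, vUv + vec2(0.0, uTexelSize.y)).x;
        // A large depth difference between neighbours suggests a silhouette edge.
        float edge = step(0.001, abs(d - dx) + abs(d - dy));
        gl_FragColor = vec4(vec3(1.0 - edge), 1.0);
      }
    `;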

OpenGL ES: Pre-rendering to an FBO texture

Is it possible to render to an FBO texture once and then use the resulting texture handle when rendering all subsequent frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then just reuse it. However, if I simply set a flag to render the depth texture once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after a frame has finished rendering? What would be the right way to preserve the rendered texture for use in the following frames?
Rendering to a texture is no different than if you had uploaded those pixels to the texture in the first place. The contents of a texture do not magically disappear. A texture's contents are changed when you change them. This could be by uploading data to the texture, or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc).
Unless you do something to explicitly change the data stored in a texture, it won't change.
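Concretely, the render-once-then-reuse pattern the questioner is after could be sketched like this in WebGL; shadowFbo, shadowDepthTexture and the two draw helpers are hypothetical names.

    let shadowMapBaked = false;

    function renderFrame() {
      if (!shadowMapBaked) {
        gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFbo);
        gl.clear(gl.DEPTH_BUFFER_BIT);
        drawSceneFromLight();                      // depth-only pass into the FBO
        gl.bindFramebuffer(gl.FRAMEBUFFER, null);
        shadowMapBaked = true;                     // the texture's contents persist
      }

      // Use the previously rendered texture like any other texture.
      gl.activeTexture(gl.TEXTURE0);
      gl.bindTexture(gl.TEXTURE_2D, shadowDepthTexture);
      drawSceneWithShadows();
    }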

How can I read the depth buffer in WebGL?

Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed; these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage (a JavaScript setup sketch follows the shader snippet below):
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
// gl_Position is only available in the vertex shader, so this line lives there
// and texPos is passed to the fragment shader as a varying:
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
// In the fragment shader, sample the depth texture like any other texture:
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
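The preparation steps above could be sketched in JavaScript roughly as follows; width and height stand for the render target size, and the whole block is illustrative setup code rather than the original answer's.

    const ext = gl.getExtension('WEBGL_depth_texture');
    if (!ext) { throw new Error('WEBGL_depth_texture not supported'); }

    // Color texture
    const colorTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, colorTex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

    // Depth texture (gl.DEPTH_COMPONENT is legal once the extension is enabled)
    const depthTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, depthTex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

    // Combine both textures into a single framebuffer
    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);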
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render the depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
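A minimal fragment shader for that approach could look like the sketch below (written as a JavaScript string, WebGL-style). Packing the depth value across the RGBA bytes, as real shadow-mapping code usually does for precision, is left out for brevity.

    const depthToColorFragmentShader = `
      precision highp float;
      void main() {
        // gl_FragCoord.z is the window-space depth in [0, 1].
        // Writing it into the color channels stores depth in the RGBA texture,
        // at the cost of only 8 bits of precision per channel.
        gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
      }
    `;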
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
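The readback step of that scheme could be sketched like this, assuming the ID scene has already been rendered into pickFbo with object IDs encoded in the red/green channels and a solid blue key, as described above; the function name and encoding details are illustrative.

    function pick(gl, pickFbo, mouseX, mouseY, canvasHeight) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, pickFbo);
      const pixel = new Uint8Array(4);
      // readPixels uses a bottom-left origin, so flip the mouse y coordinate.
      gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
      gl.bindFramebuffer(gl.FRAMEBUFFER, null);

      if (pixel[2] !== 255) return null;     // not fully blue: nothing of interest here
      return pixel[0] * 256 + pixel[1];      // decode the object ID from red/green
    }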
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.
