I am using instancing to draw a large set of billboards.
I need to sort these instances by distance to camera to fix transparency artifacts.
Ideally I would like to sort the instance buffer on the GPU using shaders.
The articles I've read use textures to sort items. But is it possible to directly sort the instance buffer? Or quickly transfer the data from the texture to the instance buffer?
OK, I just found the bit I was missing (sorry, I had been reading articles for days without finding it).
I must store instance data in a Buffer Texture.
https://www.opengl.org/wiki/Buffer_Texture
It's a buffer that can also be accessed as a texture.
It can therefore be used as a texture by the fragment shader when sorting.
And it should be accessible as an attribute in the vertex shader when drawing the instances.
Related
I need to use one texture as a library of small parts and then copy each part onto a set of different textures used in the scene. I think this is fundamental, yet I cannot find documentation for it specifically in three.js. Should I look for a solution in pure WebGL, or is there a three.js solution?
EDIT:
One "solution" would be to set the UV coordinates of the target mesh (plane, whatever) to correspond only to the desired part of the big texture, but that would definitely be very inefficient, as it would require for each small mesh to load the whole big texture a huge waste of memory and GPU power as the whole big texture would be probably transformed multiple times(!), aside the inconvenience of setting the different UV coordinates on each mesh object.
So, I'm looking to copy and assign only a small part and occupy in memory and process only that part for each mesh.
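In case it helps, one way to do the "copy only the small part" step with three.js is to blit the wanted rectangle onto a 2D canvas once and build an independent texture from that canvas, so each mesh only holds its own small image. This is just a rough sketch under the assumption that the big texture is available as an image element; the names (atlasImage, region) are illustrative, not an official API.
function extractSubTexture(atlasImage, region) {
  const canvas = document.createElement('canvas');
  canvas.width = region.width;
  canvas.height = region.height;
  const ctx = canvas.getContext('2d');
  // copy only the requested rectangle out of the big image
  ctx.drawImage(
    atlasImage,
    region.x, region.y, region.width, region.height, // source rectangle in the atlas
    0, 0, region.width, region.height                // destination rectangle
  );
  return new THREE.CanvasTexture(canvas);
}
// usage: each small mesh gets its own small texture
// const partTex = extractSubTexture(atlasImage, { x: 0, y: 0, width: 64, height: 64 });
// mesh.material = new THREE.MeshBasicMaterial({ map: partTex });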
I tried InstancedBufferGeometry and it works great.
However, intersection (raycasting) is not happening with InstancedBufferGeometry. I checked in the three.js (r85) library: the checkBufferGeometryIntersection function uses only the position value, so I think the per-instance offset and orientation values need to be used together with the position.
I have another doubt: I have used only one RawShaderMaterial, so how can I highlight the selected geometry?
Can anyone guide me? Thanks in advance.
As far as the CPU is concerned (where you do the raycasting), those instances do not exist. You do, however, have your master geometry available. What you can do is create another instance of BufferGeometry and then create the same number of Mesh objects using that one instance of geometry. Use the same logic as for instancing to place them in a scene. You don't render them, thus saving the overhead of multiple draw calls, but you do have them available for intersection as if they were normal geometry, because they are (you're just not rendering them).
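A rough sketch of that idea in three.js, assuming the per-instance offsets and orientations are still available on the CPU (all names below are illustrative):
// one non-rendered Mesh per instance, sharing the master geometry,
// purely so the CPU-side Raycaster has something to intersect
const pickables = [];
for (let i = 0; i < instanceCount; i++) {
  const proxy = new THREE.Mesh(masterGeometry, new THREE.MeshBasicMaterial());
  proxy.position.fromArray(instanceOffsets, i * 3);       // same data fed to the instanced attributes
  proxy.quaternion.fromArray(instanceOrientations, i * 4);
  proxy.updateMatrixWorld(true);
  proxy.userData.instanceId = i;
  pickables.push(proxy);                                   // note: never added to the scene
}
// on click:
// const raycaster = new THREE.Raycaster();
// raycaster.setFromCamera(mouseNdc, camera);
// const hits = raycaster.intersectObjects(pickables);
// if (hits.length) console.log('picked instance', hits[0].object.userData.instanceId);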
As #pailhead already wrote, raycasting with instanced geometries cannot work.
An alternative approach to achieve the same goal is to use so-called GPU picking. For this you render the scene into a framebuffer, using a special shader that will just output a unique color-value for every instance.
You can then sample the point under the cursor from that framebuffer and compute the instance-id from the color-value.
You can see an example for this technique here or here.
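To give a rough idea of how the readback side can look with three.js (this uses the current setRenderTarget API; the identifiers and the id-encoding convention are illustrative):
// pickingScene holds the same instanced mesh, but with a material whose fragment
// shader writes an instance id encoded as a colour instead of normal shading
const pickingTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
function pick(mouseX, mouseY, renderer, pickingScene, camera) {
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  // read the single pixel under the cursor (note the flipped y axis)
  const pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(
    pickingTarget, mouseX, pickingTarget.height - mouseY, 1, 1, pixel);
  // decode the id the picking shader packed into RGB (0 reserved for "nothing")
  const id = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
  return id === 0 ? -1 : id - 1;
}
Once you have the instance id, highlighting the selected instance can be as simple as passing that id as a uniform to the RawShaderMaterial and tinting the matching instance in the shader.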
Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed, these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures in to a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
// in the vertex shader: derive [0, 1] texture coordinates from the projected
// position and pass them to the fragment shader as a varying
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
// in the fragment shader: read the depth texture like any other texture
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
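A minimal WebGL setup sketch for the preparation steps above; gl, width and height are assumed to exist, and the parameters are only one reasonable choice:
const ext = gl.getExtension('WEBGL_depth_texture');
if (!ext) { throw new Error('WEBGL_depth_texture not supported'); }
// colour texture
const colorTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// depth texture (this is what the extension enables)
const depthTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// combine both textures into a single framebuffer
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);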
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render the depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
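For illustration, a possible pair of shaders for that approach (kept as JavaScript template strings); the attribute and uniform names are made up, and writing depth to a single 8-bit channel is low precision, so real code often packs the depth across all four RGBA channels:
const depthVertexShader = `
  attribute vec3 aPosition;
  uniform mat4 uMvpMatrix;
  varying float vDepth;
  void main() {
    gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
    vDepth = (gl_Position.z / gl_Position.w) * 0.5 + 0.5;  // depth in [0, 1]
  }
`;
const depthFragmentShader = `
  precision mediump float;
  varying float vDepth;
  void main() {
    gl_FragColor = vec4(vDepth, vDepth, vDepth, 1.0);      // depth written as colour
  }
`;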
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
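For completeness, the readback step described above boils down to a few lines of WebGL; this sketch assumes the picking framebuffer has already been rendered and follows the same convention (id packed into red and green, blue used as the key), with illustrative names:
function readPickedId(gl, pickingFbo, x, y) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pickingFbo);   // the offscreen picking framebuffer
  const pixel = new Uint8Array(4);
  // y must be flipped relative to mouse coordinates: pass (canvasHeight - mouseY)
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  if (pixel[2] !== 255) return -1;                  // blue channel is the "this is an object" key
  return (pixel[0] << 8) | pixel[1];                // object id encoded in red and green
}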
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.
An OpenGL ES sequence like this can be used to render multiple objects in one pass:
glVertexPointer(...params..., vertex_Array );
glTexCoordPointer(...params..., texture_Coordinates_Array );
glBindTexture(...params..., one_single_texture_ID );
glDrawArrays( GL_TRIANGLES, 0, number_Triangles * 3 );  // (mode, first vertex, vertex count)
Here, the vertex array and texture coordinates array can refer to innumerable primitives that can be described in one step to OpenGL.
But do all these primitives' texture coordinates have to reference the one, single texture in the glBindTexture command?
It would be nice to pass in an array of texture identifiers:
glBindTexture(...params..., texture_identifier_array[] );
Here, there would be a texture ID in the array for every primitive shape described in the preceding calls. So, each shape's texture coordinates would pertain to the texture identified in "texture_identifier_array[]".
I can see one option is to place all textures of interest on one large texture that can be referenced as a single entity in the drawing calls. On my platform, this creates an intermediate step with a large bitmap that might cause memory issues.
It would be best for me to be able to pass an array of texture identifiers to the OpenGL ES drawing calls. Can this be done?
No, that's not possible. You could perhaps emulate it by using a texture array and giving your vertices a texture index; in the fragment shader you could then look up the right texture with that index. But I doubt that ES supports texture arrays, and even then I don't know whether this would really work, or whether a texture atlas solution would be much more efficient.
If you want to render multiple versions of the same geometry (which I doubt), you're looking for instanced rendering, which also isn't supported on ES devices, I think.
So the way to go at the moment will be a texture atlas (multiple textures in one) or just calling glDrawArrays multiple times.
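To make the atlas option concrete, the per-shape work is just remapping texture coordinates into the sub-rectangle each shape uses; written here in WebGL-style JavaScript (the OpenGL ES calls are analogous), with illustrative names:
function remapUvsToAtlasRegion(uvs, region, atlasWidth, atlasHeight) {
  // uvs: flat array [u0, v0, u1, v1, ...] in the 0..1 range of the original small texture
  // region: { x, y, width, height } of that small texture inside the atlas, in pixels
  const remapped = new Float32Array(uvs.length);
  for (let i = 0; i < uvs.length; i += 2) {
    remapped[i]     = (region.x + uvs[i]     * region.width)  / atlasWidth;
    remapped[i + 1] = (region.y + uvs[i + 1] * region.height) / atlasHeight;
  }
  return remapped;
}
// all shapes then share one bound atlas texture and can go into a single draw call:
// gl.bindTexture(gl.TEXTURE_2D, atlasTexture);
// gl.drawArrays(gl.TRIANGLES, 0, totalVertexCount);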
I'm writing a drawing application, and the drawing canvas is an OpenGL texture. When you draw onto the canvas, it determines which region of the canvas texture has been changed, and copies that pixel data out (using glReadPixels) before applying the changes you made.
To undo, I want to simply revert to the previous texture state using that pixel data that was copied out. However, OpenGL ES doesn't provide a glDrawPixels command. What's the best way to do it?
I've considered two options, but I'm not sure either is that great:
Create a temporary texture using the pixels I copied out and draw that in. (However, the copied region is not a power of two!)
Unbind the large canvas texture completely, manually alter the bytes of the texture, and then put it back into OpenGL. I'm not using any sort of compression, so this might not be that bad. But it seems like a hack?
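A side note on option 2: the bytes can be put back without recreating or re-uploading the whole canvas texture, because texSubImage2D overwrites just a sub-rectangle, and that rectangle does not need to be a power of two. A WebGL-flavoured sketch with illustrative names; the OpenGL ES call glTexSubImage2D works the same way:
function restoreRegion(gl, canvasTexture, region, savedPixels) {
  // savedPixels: the Uint8Array previously captured with readPixels for this region
  gl.bindTexture(gl.TEXTURE_2D, canvasTexture);
  gl.texSubImage2D(
    gl.TEXTURE_2D, 0,
    region.x, region.y, region.width, region.height,
    gl.RGBA, gl.UNSIGNED_BYTE, savedPixels
  );
}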
Anybody have any ideas? I'd really appreciate it!
In case anyone stumbles across this while trying to do something similar, I've come up with a solution that seems to work well.
Grab an image of the current texture by binding it to the framebuffer and then writing the framebuffer to a CGImageRef.
Create a new CGContext and draw the existing texture's CGImageRef into it. Then draw the old texture data into the portion that the user changed, effectively "undoing" that change to the image.
Destroy the old OpenGL texture and create a new texture from the CGContext.
I think this is a pretty slow way of going about things, but I don't need huge performance - my real concern was limiting the amount of data being kept to represent the "old" texture.
If you need help with this (there's quite a bit of code) feel free to email me.