What components of a three.js scene need to be recreated when restoring the WebGL context?
For example, can I do:
scene.add( myoldcamera );
Or do I need to do:
scene.add( new Camera() );
And is it different depending on the object type, e.g. Materials, Lights, Meshes, etc.?
When a WebGL context is lost, it never comes back automatically. In the webglcontextrestored event handler we need to re-create textures, buffers, framebuffers, renderbuffers, shaders, and programs, and set up state (clearColor, blendFunc, depthFunc, etc.) again.
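As a rough sketch of the event-handling side (the listener registration is standard WebGL; recreateSceneResources is a hypothetical helper that rebuilds your app's textures, buffers, shaders, and render state):

renderer.domElement.addEventListener( 'webglcontextlost', function ( event ) {
    // Prevent the default behaviour so the browser is allowed to restore the context later.
    event.preventDefault();
}, false );

renderer.domElement.addEventListener( 'webglcontextrestored', function () {
    // Re-create GPU-side resources here; the JavaScript-side objects
    // (cameras, meshes, lights) still exist and do not need to be rebuilt.
    recreateSceneResources();
}, false );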
I’m having a hard time getting an aoMap working in three.js…
I have a glb asset with an aoMap on the red channel or something. When I bring it into the Babylon viewer, I can see the AO just fine, but it won't show up in the three.js viewer or in my project. I think this has something to do with a second set of UVs, but I can't find a resource that covers doing that on top of using the glTF loader. I really don't know what to do here. Any response would be greatly appreciated!
Here is my code (I'm using an HTML canvas as the texture).
I get the model's geometry and diffuse texture (all white) as desired, but the aoMap isn't showing…
[Links/screenshots from the original post: the code, the Babylon viewer, the three.js viewer, a working application with shadows included in the diffuse, and the not-working result where the diffuse is just white and the aoMap is not showing.]
You're right about needing a second set of UVs. The reason behind this is that diffuse textures often repeat (think of a brick wall, or checkered t-shirt). AO shading, however, is more likely to be unique on each part of the geometry, so it's almost never repetitive. Since this often would need an alternative UV mapping method, the default is to use a second set of UVs.
You could do one of two things:
Re-export your GLTF asset with a duplicate set of UVs.
Duplicate existing UVs in Three.js by creating a new BufferAttribute in your geometry:
// Get existing `uv` data array
const uv1Array = mesh.geometry.getAttribute("uv").array;
// Use this array to create new attribute named `uv2`
mesh.geometry.setAttribute( 'uv2', new THREE.BufferAttribute( uv1Array, 2 ) );
Both .getAttribute and .setAttribute are methods of BufferGeometry, in case you want to read more about them.
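For a glTF asset, a quick way to apply option 2 to every mesh could look like the sketch below (assuming gltf is the result passed to GLTFLoader's onLoad callback, and a three.js version where the aoMap samples the uv2 attribute, as in the snippet above):

gltf.scene.traverse( function ( child ) {
    if ( child.isMesh && child.geometry.getAttribute( 'uv' ) && ! child.geometry.getAttribute( 'uv2' ) ) {
        // Reuse the first UV set as the second one that the aoMap reads from.
        const uvArray = child.geometry.getAttribute( 'uv' ).array;
        child.geometry.setAttribute( 'uv2', new THREE.BufferAttribute( uvArray, 2 ) );
    }
} );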
I’m using three.js inside an experimental augmented-reality web browser. (The browser is called Argon. Essentially, Argon uses Qualcomm’s Vuforia AR SDK to track images and objects in the phone camera. Argon sends the tracking information into Javascript, where it uses transparent web pages with three.js to create 3D graphics on top of the phone video feed.) My question is about three.js, however.
The data Argon sends into the web page allows me to align the 3D camera with the physical phone camera and draw 3D graphics such that they appear to align with the real world as expected. I would also like to have some of the things in the physical world occlude the 3D graphics (I have 3D models of the physical objects, because I’ve set the scene up or because they are prepared objects like boxes that are being tracked by Vuforia).
I’m wondering if folks have suggestions on the best way to accomplish this occlusion with three.js. Thanks.
EDIT: it appears that the next version of three.js (R71) will have a simpler way to do this, so if you can use the dev branch (or just wait), you can do this much more easily. See this post: three.js transparent object occlusion
MY ORIGINAL ANSWER (without using the new features in R71):
I think the best way to do this (avoiding extra work such as creating new rendering passes) is to modify the WebGL renderer (src/renderers/WebGLRenderer.js) and add support for a new kind of object; perhaps call them “occlusionObjects”.
If you look in the renderer, you will see two current object lists, opaqueObjects and transparentObjects. The renderer sorts the renderable objects into these two lists, so that it can render the opaque objects first, and then the transparent objects after them. What you need to do is store all of your new objects into the occlusionObjects list rather than those two. You will see that the opaque and transparent objects are sorted based on their material properties. I think here, you may want to add a property to an object you want to be an occluder (“myObject.occluder = true”, perhaps), and just pull those objects out.
Once you have the three lists, look at what the render() function does with these object lists. You’ll see a couple of places with rendering calls like this:
renderObjects( opaqueObjects, camera, lights, fog, true, material );
Add something like this before that line, to turn off writing into the color buffers, render the occlusion objects into the depth buffer only, and then turn color buffer writes back on before you render the remaining objects.
context.colorMask( false, false, false, false );
renderObjects( occlusionObjects, camera, lights, fog, true, material );
context.colorMask( true, true, true, true );
You’ll need to do this in a couple of places, but it should work.
Now you can just mark any objects in your scene as “occluder = true” and they will only render into the depth buffer, allowing the video to show through and occluding any opaque or transparent objects rendered behind them.
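For completeness, with the simpler route mentioned in the edit above, an occluder can be built from an ordinary material whose colorWrite flag is turned off; it then writes only depth, much like the renderer modification. A minimal sketch, assuming a recent three.js build (realObjectGeometry is a placeholder for your model of the physical object):

// Writes depth but no color, so the camera video shows through while
// virtual objects behind it are still hidden.
var occluderMaterial = new THREE.MeshBasicMaterial( { colorWrite: false } );
var occluder = new THREE.Mesh( realObjectGeometry, occluderMaterial );
occluder.renderOrder = -1; // draw occluders before the rest of the scene
scene.add( occluder );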
It appears that THREE.js sends mesh (geometry and material) information to the card only when an object is first rendered. Unfortunately this can cause noticeable hiccups in frame rate when a new object comes on the scene.
Is there a way to utilize the three.js framework (or is there a parameter I'm missing) to send the mesh data down to the card immediately after the associated resources are loaded, rather than on first render? I've considered creating a temporary / off-screen scene that I could put each object into on load, render once, and then discard. I've also tried calling the low-level renderer functions directly with mock data to force the write. It works, but both are hacks.
Any suggestions?
Three.js r67.
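For what it's worth, the temporary off-screen-scene hack described in the question might look roughly like this (a sketch only; prewarm is a made-up name, and the extra render() call is what triggers the buffer upload and shader compilation):

// Render a freshly loaded object once into a throwaway scene so its GPU
// resources are created now instead of on the first real frame.
function prewarm( renderer, object ) {
    var warmupScene = new THREE.Scene();
    var warmupCamera = new THREE.PerspectiveCamera( 45, 1, 0.1, 100 );

    // Note: for lit materials the compiled program depends on the lights in the
    // scene, so add the same lights here that your real scene uses.
    var oldCulled = object.frustumCulled;
    object.frustumCulled = false; // make sure the object is actually drawn

    warmupScene.add( object );
    renderer.render( warmupScene, warmupCamera );
    warmupScene.remove( object );

    object.frustumCulled = oldCulled;
}

Note that this still draws to the renderer's canvas, so it is best done during loading, before the object is needed on screen.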
I'm working with the three.js editor where I parse an object from JSON format. As usual it first parses the materials and geometries, then I create meshes from it. While parsing materials I also load textures. The issue now is that I have to call...
object.geometry.uvsNeedUpdate = true;
object.geometry.buffersNeedUpdate = true;
... after the image for the texture has completely loaded - but why?! The geometry never changed, and neither did its UVs or anything like that. It's still the plain old geometry, yet I always get a GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 2 when trying to render. It only works with this "hack", although the geometry is always the same.
In my opinion it should also work perfectly when I update the uvs after object creation (or not at all). I didn't find anything in the three.js editor code that would update the geometry or its faceVertexUvs.
I know it's a bit of an abstract problem, I'm mainly looking for some hints or insights why this hack might be necessary.
Thanks!
Three.js "guesses" whether UVs are needed based on the textures you use, in bufferGuessUVType. If you want to preallocate UV buffers, you can, for example, initialize the material's map with an empty THREE.Texture, or update the geometry after the map has been assigned.
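A rough sketch of the empty-texture route (so the UV buffer is allocated from the start; 'myTexture.png' and the material type are just placeholders):

// Give the material a placeholder map so three.js allocates the uv buffer
// the first time the object is rendered.
var material = new THREE.MeshLambertMaterial( { map: new THREE.Texture() } );

// Once the real image has loaded, swap it in and flag the texture for upload.
var image = new Image();
image.onload = function () {
    material.map.image = image;
    material.map.needsUpdate = true;
};
image.src = 'myTexture.png';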
Thanks for reading.
I have a WebGLRenderTarget that I render to. At a subsequent stage in the rendering process I use that texture as input to a shader.
I would like to be able to use that same render target with multiple wrappings/filterings. I have looked a bit at the internals of three.js, and I am not sure that it is possible.
It seems like in WebGL I would be able to just bind the same texture multiple times with different parameter settings. I figure I can fork three.js to support a new type of texture that just uses another texture with new parameters, but I wanted to see if there is some way I could do this without forking.
Thanks in advance!