I have a project that loads and renders three different scenes in three different areas of the site.
At the moment, every time I need to change scenes, I remove the canvas (and all Three.js listeners and iterators) and render the new scene from scratch.
Is this good practice, or are there performance benefits to creating a single scene with its own renderer and loading into it the meshes, lights and cameras from the three separate scenes?
Has anyone already tested a similar scenario?
Are you looking for something like this?
If you are looking to show one scene at a time, I think the best option is to create the scenes and render only the selected one. See THREE.WebGLRenderer.render().
Performance-wise it would probably be slightly better to use a single renderer instance with different scene/camera objects passed to its render() call, and (if necessary) to relocate the renderer.domElement on your site. That way you don't need multiple canvases and rendering contexts around, and the renderer may be able to cache a few WebGL-related things. This method also makes it possible to reuse things like materials across scenes.
However, the performance difference should be minimal and mostly (or entirely) initialisation cost, in a range of at most 100ms (it depends of course on what exactly you are doing, but this would be my guess).
So if it feels better to have everything isolated (and there might be good reasons for that as well depending on how your app is built): Just keep it that way and measure if it has any negative impact (my guess: it probably won't).
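A minimal sketch of the single-renderer approach described above. The renderer here is a stub standing in for THREE.WebGLRenderer, and the area/scene names are made up; the point is only the control flow of switching which scene/camera pair gets rendered:

```javascript
// Stub standing in for THREE.WebGLRenderer: one instance, one canvas.
const renderer = {
  rendered: [],
  render(scene, camera) { this.rendered.push(`${scene.name}/${camera.name}`); },
};

// One scene/camera pair per area of the site (hypothetical names).
const areas = {
  gallery: { scene: { name: 'gallery' }, camera: { name: 'galleryCam' } },
  product: { scene: { name: 'product' }, camera: { name: 'productCam' } },
  contact: { scene: { name: 'contact' }, camera: { name: 'contactCam' } },
};

let active = areas.gallery;

// Switching scenes just changes which pair is passed to render();
// the canvas, the GL context and any cached state stay alive.
function showArea(key) { active = areas[key]; }

// Called from the requestAnimationFrame loop.
function animate() { renderer.render(active.scene, active.camera); }

showArea('product');
animate();
console.log(renderer.rendered); // ['product/productCam']
```

With the real WebGLRenderer the only extra step would be moving renderer.domElement into the right container when the area changes.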
I am currently facing the problem that rendering custom geometry into my Forge viewer is allocating a huge amount of memory. I am using the technique suggested on the Autodesk website: https://forge.autodesk.com/en/docs/viewer/v7/developers_guide/advanced_options/custom-geometry/.
I need to render up to 300 custom geometries in my viewer, but when I try to do so the site simply crashes and my memory usage jumps over 5 GB. Is there a good way to render a large amount of custom geometry in the Forge viewer while keeping performance at a useful level?
Thx, JT
I'm afraid this is not something Forge Viewer will be able to help with. Here's why:
The viewer contains many interesting optimizations for efficient loading and rendering of a single complex model. For example, it builds a special BVH of all the geometries in a model so that they can be traversed and rendered in an efficient way. The viewer can also handle very complex models by moving their geometry data in and out of the GPU to make room for others. But again, all this assumes that there's just one (or only a few) models in the scene. If you try and add hundreds of models into the scene instead, many of these optimizations cannot be applied anymore, and in some cases they can even make things worse (imagine that the viewer suddenly has to traverse 300 BVHs instead of a single one).
So what I would recommend would be: try and avoid situations where you would have hundreds of separate models in the scene. If possible, consider "consolidating" them into a single model. For example, if the 300 models were Inventor assemblies that you need to place at specific positions, you could:
aggregate all the assemblies using Design Automation for Inventor, and convert the result into a single Forge model, or
create a single Forge model with all 300 geometries, and then move them around during runtime using the Viewer APIs
If none of these options work for you, you could also take a look at the new format called SVF2 (https://forge.autodesk.com/blog/svf2-public-beta-new-optimized-viewer-format) which significantly reduces the memory footprint.
I was hoping to display in my app as many as 3,000 instances of the same model at the same time - but it’s really slowing down my computer. It's just too much.
I know InstancedMesh is the way to go for something like this so I’ve been following THREE.js’s examples here: https://threejs.org/docs/#api/en/objects/InstancedMesh
The examples are fantastic, but they seem to use really small models which makes it really hard to get a good feel for what the model size-limits should be.
For example:
-The Spheres used here aren't imported custom 3D models, they're just instances of IcosahedronGeometry
-The Flower.glb model used in this example is tiny: it only has 218 Vertices on it.
-And the “Flying Monkeys” here come from a ".json" file so I can’t tell how many vertices that model has.
My model by comparison has 4,832 vertices, which, by the way, was listed in the "low-poly" category where I found it, so it's not really considered particularly big.
In terms of file-size, it's exactly 222kb.
I tried scaling it down in Blender and re-exporting it - still came out at 222kb.
Obviously I can take some “drastic measures”, like:
-Try to re-design my 3D model and make it smaller - but that would greatly reduce its beauty and the overall aesthetics of the project
-I can re-imagine or re-architect the project to display maybe 1,000 models at the same time instead of 3,000
etc.
But since I'm new to THREE.js, and 3D modeling in general, I just wanted to first ask the community whether there are any suggestions or tricks to try out before making such radical changes.
-The model I'm importing is in the .glTF format - is that the best format to use or should I try something else?
-All the meshes in it come into the browser as instances of BufferGeometry which I believe is the lightest in terms of memory demands - is that correct?
Are there any other things I need to be aware of to optimize performance?
Some setting in Blender or other 3D modeling software that can reduce model-size?
Some general rules of thumb to follow when embarking on something like this?
Would really appreciate any and all help.
Thanks!
glTF is fine for transmitting geometry and materials; I might say it's the standard right now. If there's only geometry, I'd look at the OBJ or PLY formats.
The model size matters, but only for the initial load, if we employ instancing on its geometry and material. That way we simply reuse the already-generated geometry and its material.
At the GPU level, instancing means drawing a single mesh with a single material shader, many times. You can override certain inputs to the material for each instance, but it sort of has to be a single material.
— Don McCurdy
Our biggest worry here would be the number of triangles or faces rendered. The lower the count, the better the performance, and likewise the fewer models rendered at a time, the better. To help with this, you can use some degree of LOD to progressively increase and decrease your models' detail, until you stop rendering them entirely at a distance.
Some examples/resources to get you started:
LOD
Instancing Models
Modifying Instances
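As a sketch of why instancing helps here: the 4,832-vertex geometry is uploaded to the GPU once, and each of the 3,000 instances only adds a 4x4 transform matrix. In three.js this would be `new THREE.InstancedMesh(geometry, material, count)` plus `setMatrixAt()`; below only the per-instance layout math runs, so the example stays self-contained (grid spacing and counts are made up):

```javascript
// Lay 3000 instances out on a cube grid. With three.js, each position would be
// written into the InstancedMesh roughly like this:
//   matrix.setPosition(x, y, z);
//   mesh.setMatrixAt(i, matrix);
//   mesh.instanceMatrix.needsUpdate = true;
const COUNT = 3000;
const SIDE = Math.ceil(Math.cbrt(COUNT)); // 15 cells per axis holds 3375 >= 3000
const SPACING = 2;

const positions = [];
for (let i = 0; i < COUNT; i++) {
  const x = i % SIDE;
  const y = Math.floor(i / SIDE) % SIDE;
  const z = Math.floor(i / (SIDE * SIDE));
  positions.push([x * SPACING, y * SPACING, z * SPACING]);
}

console.log(positions.length); // 3000
console.log(positions[0]);     // [0, 0, 0]
// One draw call for all instances either way: the vertex cost is 4,832 per
// unique geometry, not 4,832 x 3,000.
```

The file size (222 KB) barely matters after the initial download; what the GPU sees per frame is one geometry plus the instance matrices.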
I have written code to render a scene containing lights that work like projectors. But there are a lot of them (more than can be represented with a fixed number of lights in a single shader). So right now I render this by creating a custom shader and rendering the scene once for each light. In each pass the fragment shader determines the contribution for that light, and the blender adds that contribution into the backbuffer.

I found this really awkward to set up in three.js. I couldn't find a way of doing multiple passes like this where different materials and different geometry are required for the different passes, so I had to do it by having multiple scenes. The problem there is that I can't have an Object3D that is in multiple scenes (please correct me if I'm wrong). So I need to create duplicates of the objects, one for each scene they appear in. This all starts looking really hacky quickly. It's all so special that it seems to be incompatible with various three.js framework features such as VR rendering.

Each light requires shadowing, but I don't have memory for a shadow buffer per light, so the code alternates: it renders the shadow buffer for one light, then the accumulation phase for that light, then the shadow buffer for the next light, then the accumulation for the next light, and so on.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working, each time forgoing yet another three.js framework feature that doesn't work properly in conjunction with my multi-pass technique. But it doesn't seem like what I'm doing is so out of the ordinary.
My main surprise is that I can't figure out a way to set up a multi-pass scene that does this back-and-forth rendering and accumulating. And my second surprise is that the Object3Ds I create don't like being added to multiple scenes at the same time, so I have to create duplicates of each object for each scene it wants to be in, in order to keep their states from interfering with each other.
So is there a way of rendering this kind of multi-pass accumulative scene in a better way? Again, I would describe it as a scene with more than the maximum number of lights allowed in a single shader pass, so their contributions need to be alternately rendered (shadow buffers) and then additively accumulated over multiple passes. The lights work like typical movie projectors that project an image (as opposed to being a uniform solid-color light source).
How can I do multi-pass rendering like this and still take advantage of good framework stuff like stereo rendering for VR and automatic shadow buffer creation?
Here's a simplified snippet that demonstrates the scenes that are involved:
renderer.render(this.environment.scene, camera, null);
for (let i = 0, ii = this.projectors.length; i < ii; ++i) {
  let projector = this.projectors[i];
  renderer.setClearColor(0x000000, 1);
  renderer.clearTarget(this.shadowRenderTarget, true, true, false);
  renderer.render(projector.object3D.depthScene, projector.object3D.depthCamera, this.shadowRenderTarget);
  renderer.render(projector.object3D.scene, camera);
}
renderer.render(this.foreground.scene, camera, null);
There is a scene that renders lighting from the environment (done with normal lighting) then there is a scene per projector that computes the shadow map for the projector and then adds in the light contribution from each projector, then there is a "foreground" scene with overlays and UI stuff in it.
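The pass structure in the snippet can be sketched as plain control flow, which makes the memory trade-off explicit: one shared shadow target is reused by every projector instead of allocating one per light. The renderer is a stub and the projector names are made up:

```javascript
// Stub renderer that just records the order of passes.
const log = [];
const renderer = { render: (label) => log.push(label) };

const projectors = ['projA', 'projB', 'projC'];
const shadowTarget = 'sharedShadowRT'; // single target, reused by every projector

renderer.render('environment'); // base pass: normal lighting
for (const p of projectors) {
  // 1) render this projector's depth into the one shared shadow target...
  renderer.render(`${p}:shadow->${shadowTarget}`);
  // 2) ...then additively accumulate its light contribution into the backbuffer,
  //    before the next projector overwrites the shadow target.
  renderer.render(`${p}:accumulate`);
}
renderer.render('foreground'); // overlays / UI

console.log(log.length); // 8 passes: 1 + 2 per projector + 1
```

The cost of reusing one shadow target is strict ordering: each projector's accumulation pass must run before the next projector's shadow pass.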
Is there a more "three.js" way?
Unfortunately I think the answer is no.
I'd much rather set this up in a more "three.js" way. I seem to be writing hack upon hack to get this working,
and welcome to the world of three.js development :)
scene graph
You cannot have a node belong to multiple parents. I believe three also does not allow you to do this:
const myPos = new THREE.Vector3()
myMesh_layer0.position = myPos
myMesh_layer1.position = myPos
It won't work with Eulers, quaternions or a matrix.
Managing the matrix updates in multiple scenes would be tricky as well.
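The aliasing hazard behind this can be shown with plain objects, no three.js needed; the safe pattern is to copy values each frame (which is what THREE.Vector3's copy() does) rather than share the reference:

```javascript
// Two "meshes" sharing one position object: moving one moves both.
const shared = { x: 0, y: 0, z: 0 };
const meshA = { position: shared };
const meshB = { position: shared };

meshA.position.x = 5;
console.log(meshB.position.x); // 5: meshB moved too, almost never what you want

// Safe pattern: keep an independent position object and copy values into it
// per frame (with three.js: meshC.position.copy(meshA.position)).
const meshC = { position: { x: 0, y: 0, z: 0 } };
Object.assign(meshC.position, meshA.position);
console.log(meshC.position.x);                  // 5, values synced
console.log(meshC.position === meshA.position); // false, still independent
```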
the three.js way
There is no way to go about the "hack upon hack" unless you start hacking the core.
Notice that it's 2018, but the official way of including three.js in your web app is still through <script> tags.
This is a great example of where it would probably be a better idea not to do things the three.js way but the modern JavaScript way, i.e. use imports, npm installs, etc.
Three.js also does not have a robust core that allows you to be flexible with code around it. It's quite obfuscated and conflated with a limited number of hooks exposed that would allow you to write effects such as you want.
Three is often conflated with its examples; if you pick a random one, it will be written in a three.js way, but far from today's best JavaScript/coding practices.
You'll often find large monolithic files, that would benefit from being broken up.
I believe it's still impossible to import the examples as modules.
Look at the material extensions examples and consider if you would want to apply that pattern in your project.
You can probably encounter more pain points, but this is enough to illustrate that the three.js way may not always be desirable.
remedies
Are few and far between. I've spent more than a year trying to push the onBeforeRender and onAfterRender hooks. They seem useful and allowed for some refactors, but another feature had to be nuked first.
The other feature was iterated on during the course of that year and only addressed a single example, until it was made obvious that onBeforeRender would address both the example, and allow for much more.
This unfortunately also seems to be the three.js way. Since the base is so big and includes so many examples, it's more likely that someone will try to optimize a single example than try to find a common pattern for refactoring a whole bunch of them.
You could go and file an issue on github, but it would be very hard to argue for something as generic as this. You'd most likely have to write some code as a proposal.
This can become taxing quite quickly, because it can be rejected or ignored, or you could be asked to provide examples or refactor existing ones.
You mentioned your hacks failing to work with various three.js features, like VR. This, I think, is a problem with three: VR has been the focus of development for at least the past couple of years, but without the core issues ever being addressed.
The good news is that three is more modular than ever before, so you can fork the project and tweak the parts you need. The issues with three may then move to a higher level: if you find some coupling in the renderer, for example, that makes it hard to keep your fork in sync, that is easier to explain than the whole goal of your particular project.
We use three.js as the foundation of our WebGL engine and up until now, we only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Our goal now is to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, trying to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects which are currently added to the same scene so three.js will handle transform updates. This, however, is a problem, since for the above algorithm to work, we need to render to one or more FBOs (the latter due to the lack of support for MRTs in three.js r79) to calculate accumulation and revealage and finally blend the result with the front buffer to which opaque objects have been previously rendered and in fact, this is what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt,..) is not applicable) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up, and in addition, I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: Now, the first obvious answer would be to simply setup two scenes and render opaque and translucent objects separately, choosing the render targets as we please and finally do our compositing as needed.
This would be fine except we would have to do or at least trigger all transformation calculations manually to achieve correct hierarchical behavior. So far, doing this seems to be the most feasible.
Solution B: We could render the scene twice and set all opaque and transparent materials' visible flags to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two scenes. This would spare us the manual transformation calculations but we would have to alter materials for all objects per pass - not my favorite, but definitely worth thinking about.
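Solution B can be sketched with stubs (all names here are hypothetical; in three.js the flag would be material.visible or object.visible, and the target switch would be a render-target argument or, in later versions, renderer.setRenderTarget()):

```javascript
// Objects tagged by material type; a stub "renderer" records what each pass drew.
const objects = [
  { name: 'wall',  transparent: false, visible: true },
  { name: 'glass', transparent: true,  visible: true },
  { name: 'floor', transparent: false, visible: true },
];

const drawn = { opaque: [], transparent: [] };
function renderPass(pass, target) {
  // Toggle visibility so a single scene (one transform hierarchy) serves both passes.
  for (const o of objects) o.visible = (pass === 'transparent') === o.transparent;
  for (const o of objects) if (o.visible) drawn[pass].push(`${o.name}->${target}`);
}

renderPass('opaque', 'frontBuffer');         // depth written here is reused below
renderPass('transparent', 'accumRevealFBO'); // depth test on, depth write off

console.log(drawn.opaque);      // ['wall->frontBuffer', 'floor->frontBuffer']
console.log(drawn.transparent); // ['glass->accumRevealFBO']
```

The appeal over Solution A is visible here: both passes walk the same object list, so the hierarchy's transforms are computed once by the single scene.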
Solution C: Patch three.js as to allow for more control of the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is someone aware of other, readily accessible approaches? Am I missing something or am I thinking way too complicated here?
I have a scenario in which I have a wall, and whenever the player hits the wall it falls into pieces. Each piece is a child GameObject of that wall, and each piece uses the same material. So I tried dynamic batching on them, but it didn't reduce the draw calls.
Then I tried CombineChildren.cs; it worked and combined all the meshes, thus reducing the draw calls, but whenever my player hit the wall the animation didn't play.
I cannot try SkinnedMeshCombiner.cs from this wiki with the answer from this link (check this), because my game objects have Mesh Renderers rather than Skinned Mesh Renderers.
Is there any other solution for this?
So I tried dynamic batching on them, but it didn't reduce the draw calls.
Dynamic batching is almost the only way to go if you want to reduce draw calls for moving objects (unless you try to implement something similar on your own).
However it has some limitations as explained in the doc.
Particularly:
limits on vertex numbers for objects to be batched
all must share the same material (not different instances of the same material - be sure to check this)
no lightmapped objects
no multipass shaders
Be sure all limitations listed in the doc are met, then dynamic batching should work.
Then I tried CombineChildren.cs; it worked and combined all the meshes, thus reducing the draw calls, but whenever my player hit the wall the animation didn't play.
That's because, after being combined, all the GameObjects are merged into a single one. You can't move them independently any more.