I am using the Three.js Raycaster in my web-based car racing game, but the heavy computation is consuming a lot of CPU cycles and causing a drop in FPS. I am thinking of offloading the Raycaster work to a Web Worker. Can anyone guide me on how to accomplish this, or is it possible at all?
There was a question about offloading the merge-geometry method to a web worker; it should help you get your head around the problem.
While merging geometry is not a great fit for a web worker, other things, like physics and perhaps your raycasting, are.
Merging geometries using a WebWorker?
The key point is that whatever work you do in the web worker has to be sent back to the main thread as an array of floats.
So you will need to pack your data up and unpack it on the other end.
This works well for physics engines: the entire system is simulated in the web worker, and the resulting x, y, z positions are passed back to the main thread for rendering.
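A minimal sketch of that packing/unpacking flow might look like the following. The worker file name, the requestRaycast helper, and the message format are all illustrative, and it assumes the worker has been given its own copy of the collision geometry (sent once at startup) and that three.js can be loaded in the worker with importScripts:

// main.js — pack the ray as plain floats, get hit data back as floats.
const worker = new Worker('raycast.worker.js'); // hypothetical worker file

function requestRaycast(origin, direction) {
  // Pack origin/direction into a transferable Float32Array.
  const ray = new Float32Array([origin.x, origin.y, origin.z,
                                direction.x, direction.y, direction.z]);
  worker.postMessage({ type: 'raycast', ray }, [ray.buffer]);
}

worker.onmessage = (event) => {
  // The worker replies with [hitX, hitY, hitZ, distance] or an empty array.
  const hit = event.data;
  if (hit.length) {
    // ...use the hit point on the main thread (e.g. steer the car)...
  }
};

// raycast.worker.js — runs its own Raycaster against a copy of the scene data.
importScripts('three.min.js'); // assumption: three.js is usable in the worker

const raycaster = new THREE.Raycaster();
let collisionMesh; // built once from vertex data posted by the main thread

onmessage = (event) => {
  const [ox, oy, oz, dx, dy, dz] = event.data.ray;
  raycaster.set(new THREE.Vector3(ox, oy, oz),
                new THREE.Vector3(dx, dy, dz).normalize());
  const hits = collisionMesh ? raycaster.intersectObject(collisionMesh) : [];
  const reply = hits.length
    ? new Float32Array([hits[0].point.x, hits[0].point.y,
                        hits[0].point.z, hits[0].distance])
    : new Float32Array(0);
  postMessage(reply, [reply.buffer]); // transfer the result back as floats
};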
I currently have a big performance hit in my Three.js app when rendering the very first frame. It causes Edge and IE 11 to freeze for 5 seconds with a pop-up indicating "This window does not respond", which may scare my users.
Using Chrome's Performance profiler, it seems the problem comes from several Three.js functions, clearly identifiable in the profile results below.
WebGLUniforms.upload : 425ms (50.7% frame rendering time)
WebGLProgram.constructor : 327ms (38.9% frame rendering time)
How can I minimize the duration of these function calls?
Can I create the program over multiple frames? Or upload the uniforms?
Does the number of materials in my 3D models impact these functions?
I've tried hiding all the models in the scene and showing them one at a time; this seems to prevent the freeze, but each model takes 500 ms to appear, which is not ideal for the user experience. Maybe that's the only way to go.
Thanks for your time
EDIT: The number of materials, or their nature (MeshStandardMaterial?), seems to affect performance.
Add a few objects to the scene per frame, over time. Three.js initializes WebGL resources on first use, so if there are 100 objects in your scene, all 100 get initialized the first time you call renderer.render.
So just add N objects to the scene, call renderer.render, then the next frame add N more, and so on until all the objects have been added.
It's probably most important to spread the additions out by material and geometry. In other words, if you have 10 different materials and 10 different geometries and you're rendering 100 different models (each model uses one material and one geometry), then you want to make sure the first N models you add don't already use all the materials and all the geometries, because those are the things that need to get initialized.
Post-processing passes also need initialization, so if you're using any of those, maybe spend the first frame just initializing them, then start adding objects.
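A minimal sketch of the idea, staggering the additions across frames; modelsToAdd, addQueue, and OBJECTS_PER_FRAME are illustrative names, not part of Three.js:

// Queue pre-sorted so the first few objects cover as many distinct
// materials/geometries as possible (those are what get initialized).
const addQueue = [...modelsToAdd];
const OBJECTS_PER_FRAME = 5;

function animate() {
  requestAnimationFrame(animate);

  // Add a handful of objects each frame until the queue is empty.
  for (let i = 0; i < OBJECTS_PER_FRAME && addQueue.length > 0; i++) {
    scene.add(addQueue.shift());
  }

  // The first render of each new object compiles its program and uploads its buffers.
  renderer.render(scene, camera);
}
animate();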
I'm currently rendering geological data, and have done so successfully with good results. To clarify, I input a matrix of elevations and output a single static mesh. I do this by creating a single plane for each elevation point and then, after creating all of these individual planes, merging them into a single mesh.
I've been running at 60 FPS even on a Macbook Air, but I want to push the limits. Is using a single PlaneGeometry as a heightmap as described in other terrain examples more efficient, or is it ultimately the same product at the end of the process?
Sorry for a general question without code examples, but I think this is a specific enough topic to warrant a question.
From my own tests, which I spent a couple of days running, creating individual custom geometries and merging them as they are created is wildly inefficient, not just in the render loop but also during the loading process.
The process I am using now is creating a single BufferGeometry with enough width and height segments to contain the elevation data, as described in this article.
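A sketch of that approach, assuming elevations is a row-major Float32Array with width × height samples and that a recent Three.js is used (where PlaneGeometry is already a BufferGeometry); the sizes and material are placeholders:

// One plane whose vertex grid matches the elevation matrix; each vertex's
// z coordinate (height before rotation) is set from its elevation sample.
const width = 256, height = 256;          // number of samples per side (assumed)
const geometry = new THREE.PlaneGeometry(1000, 1000, width - 1, height - 1);

const positions = geometry.attributes.position;
for (let i = 0; i < positions.count; i++) {
  positions.setZ(i, elevations[i]);       // displace the vertex by its elevation
}
positions.needsUpdate = true;
geometry.computeVertexNormals();

const terrain = new THREE.Mesh(
  geometry,
  new THREE.MeshStandardMaterial({ color: 0x88aa66 })
);
terrain.rotation.x = -Math.PI / 2;        // lay the plane flat as the ground
scene.add(terrain);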
We've successfully implemented a demo project based on the Earth examples in the Helix 3D Toolkit, and we are creating an effect where the world rotates by using a timer and AddRotateForce on the camera controller. The problem we are having is that when more than a couple hundred world points are added to the HelixViewport3D children collection, the rotation effect slows down to the point of being unusable.
Is there a better way to add additional children to the world sphere without affecting performance dramatically?
I have a scenario where I have a wall, and whenever the player hits the wall it falls into pieces. Each piece is a child GameObject of that wall, and each piece uses the same material. So I tried dynamic batching on them, but it didn't reduce the draw calls.
Then I tried CombineChildren.cs; it worked and combined all the meshes, reducing the draw calls, but whenever my player hit the wall it didn't play the animation.
I cannot use SkinnedMeshCombiner.cs from the wiki with the answer from the linked question, because my GameObjects have Mesh Renderers rather than Skinned Mesh Renderers.
Is there any other solution for this?
So I tried dynamic batching on them, but it didn't reduce the draw calls.
Dynamic batching is almost the only way to go if you want to reduce draw calls for moving objects (unless you try to implement something similar on your own).
However, it has some limitations, as explained in the docs.
Particularly:
limits on vertex numbers for objects to be batched
all must share the same material (not different instances of the same material - be sure to check this)
no lightmapped objects
no multipass shaders
Be sure all the limitations listed in the docs are met; then dynamic batching should work.
Then I tried CombineChildren.cs; it worked and combined all the meshes, reducing the draw calls, but whenever my player hit the wall it didn't play the animation.
That's because, once combined, all the GameObjects are merged into a single one. You can't move them independently any more.
I'm working on a very small game engine that uses OpenGL ES 2.0. I'm having a bit of a design issue with integrating VBOs into my Mesh class.
The problem is that I don't want to instantiate a new VBO for each mesh, and I want the VBO size to be determined by the number of meshes I load into it (not just a fixed size of 2MB or something).
Since there's no realloc function for VBOs, I need to batch load all my vertex data at once. This is ok, since I only have 4 or 5 small meshes. So I created a MeshList class.
I call MeshList.AddMesh(Mesh mesh) and it aggregates the vertex/index data of the mesh object and returns the offsets into the array of vertex data/index data back to the mesh that was added. This way the mesh knows where it is in the VBO (but not which VBO it's in).
However, none of the MeshList data is uploaded into a VBO until I call MeshList.BindToVBO(). But now, none of my meshes know which VBO they're in. So I was thinking of creating an array of pointers in MeshList that point to integer member variables in each Mesh class that would hold the VBO Handle. This way, when BindToVBO() is called, it iterates over the pointer array and updates the VBO Handles in the mesh objects.
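Roughly, the flow I have in mind looks like this. It's just a sketch in a JavaScript/WebGL analogue (WebGL mirrors OpenGL ES 2.0); the class and method names follow the description above, but the details are assumptions:

class MeshList {
  constructor() {
    this.vertexData = [];   // flat floats aggregated from every added mesh
    this.meshes = [];       // meshes whose VBO handle will be filled in later
  }

  addMesh(mesh) {
    // Record where this mesh's vertices start inside the aggregate array,
    // so the mesh knows its offset but not yet which buffer it lives in.
    mesh.vertexOffset = this.vertexData.length;
    this.vertexData.push(...mesh.vertices);
    this.meshes.push(mesh);
  }

  bindToVBO(gl) {
    // One upload for all meshes; since there is no way to realloc a VBO,
    // this happens once after every mesh has been added.
    const vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(this.vertexData), gl.STATIC_DRAW);

    // Tell every mesh which buffer it ended up in.
    for (const mesh of this.meshes) {
      mesh.vboHandle = vbo;
    }
  }
}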
I figured this way gives me the flexibility of having different mesh objects in different VBOs, or all of them in one VBO. The only concern I have is whether or not this is a good design.
It's not clear to someone glancing at the code that MeshList.BindToVBO() is updating a whole bunch of mesh objects. I mean, MeshList does interact with all of the Mesh objects prior to the BindToVBO() call, but there's nothing explicitly saying that by passing a Mesh object to MeshList.AddMesh(), you're essentially subscribing its VBO handle member to updates at some point in the future.
I've tried to make this as clear as I can. Let me know if something needs clarification.
Honestly, to me it sounds like a lot of trouble for a dubious payoff. Do you have a reason to believe that putting multiple meshes in the same buffer is going to make a noticeable difference in your performance?
It sounds like premature optimization to me.
Sure, if you have a particle system with 50,000 particles I could see wanting that to be in a shared buffer, but in general I don't know if there's a benefit to storing two arbitrary meshes in the same buffer. It just sounds like a huge potential for bugs and headaches.