I've been learning how to improve the performance of my 3D apps. I've been told that I can use Object3D.clone() to duplicate identical objects, and that I can use InstancedMesh when rendering many objects with the same geometry and material. Could someone please tell me what the difference between them is?
Using Object3D.clone() means each cloned 3D object is rendered with an individual draw call. This approach can degrade the performance of your app, since the number of draw calls is an important performance metric of 3D applications: it should be as low as possible.
Using InstancedMesh helps to decrease the number of draw calls, since all of its instances are rendered with a single draw call.
So instead of cloning objects, you should use instanced rendering whenever possible.
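To make the difference concrete, here is a minimal sketch (the `scene` and render loop are assumed to exist already):

```js
import * as THREE from 'three';

const geometry = new THREE.SphereGeometry(0.5, 16, 16);
const material = new THREE.MeshStandardMaterial({ color: 0x44aa88 });

// Cloning: 1000 meshes -> 1000 draw calls.
// Instancing: 1000 instances -> a single draw call.
const count = 1000;
const instancedMesh = new THREE.InstancedMesh(geometry, material, count);

const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  dummy.position.set(
    (Math.random() - 0.5) * 20,
    (Math.random() - 0.5) * 20,
    (Math.random() - 0.5) * 20
  );
  dummy.updateMatrix();
  instancedMesh.setMatrixAt(i, dummy.matrix); // per-instance transform
}

scene.add(instancedMesh);
```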
I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as in the real world: each position, each normal. There is no conversion from / to local or world coordinates.
On the other hand, I could achieve the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation etc., while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, which is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a preferred pivot. Meshes can share a geometry. There are unique use cases for either, if you care to memorize the list. Sometimes I integrate preexisting code samples; sometimes the decision is made for me by some aspect of the process. As for the operations that exist on both, use whichever is more convenient. I like the pattern of modifying a dummy Object3D using those methods and then updating the geometry from its matrix. There's a whole book on normals, but I didn't write it, sadly...
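A minimal sketch of that dummy pattern (assuming `geometry` is the BufferGeometry you manipulate and `mesh` is a mesh using it):

```js
import * as THREE from 'three';

// Bake a transform into the vertex data itself: position/rotate a dummy
// Object3D with the familiar methods, then apply its matrix to the geometry.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 0);
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();

geometry.applyMatrix4(dummy.matrix); // positions and normals are rewritten

// The alternative: leave the vertex data alone and transform the mesh.
// Only the mesh's matrix changes; the buffers stay untouched.
mesh.position.set(10, 0, 0);
mesh.rotation.y = Math.PI / 4;
```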
What’s a good way to have click targets that are larger than the actual scene object?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
Are there any better solutions?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
There is no additional draw call if you set Object3D.visible to false, and you can still perform raycasting against invisible 3D objects. Use Raycaster.layers to selectively ignore 3D objects when performing intersection tests.
So what you are doing is already fine. You might want to consider raycasting only against bounding volumes if raycasting performance becomes a bottleneck in your app. The idea is to create an instance of Box3 (AABB) or Sphere (bounding sphere) for your actual scene object and use only that for raycasting.
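A sketch of both ideas (assuming `scene`, `camera`, a `pointer` in normalized device coordinates, and a `visibleMesh` for the bounding-sphere variant):

```js
import * as THREE from 'three';

// 1. Larger, invisible, but still raycastable hit target: no draw call
//    is issued for it while visible === false.
const hitTarget = new THREE.Mesh(
  new THREE.SphereGeometry(2), // bigger than the visible object
  new THREE.MeshBasicMaterial()
);
hitTarget.visible = false;
scene.add(hitTarget);

const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(pointer, camera);
const hits = raycaster.intersectObject(hitTarget);

// 2. Skip the extra object and test a bounding volume directly.
const boundingSphere = new THREE.Sphere(visibleMesh.position.clone(), 2);
const isHit = raycaster.ray.intersectsSphere(boundingSphere);
```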
I have three layers (Object3D), each with about 20K sphere geometries. The rendering makes the whole browser freeze. Is there any way to render these objects faster? As suggested in a few other answers on SO, I am using the same geometry and re-using three materials created only once. Also, these are dynamic objects, so I cannot use a pre-generated JSON. Thanks in advance!
It’s slowing down because of the overhead involved when drawing each sphere.
Instancing helps here by reducing the draw-call overhead and potentially removing scene-graph nodes from the matrix updates.
Three.js has a low-level instancing interface that does not work on the scene level.
You can try this 3rd-party module: https://www.npmjs.com/package/three-instanced-mesh
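Note that since r109 three.js ships THREE.InstancedMesh built in, so the 3rd-party module is no longer required. A rough sketch for dynamic spheres (the `scene` is assumed to exist):

```js
import * as THREE from 'three';

const COUNT = 20000;
const mesh = new THREE.InstancedMesh(
  new THREE.SphereGeometry(0.1, 8, 8), // keep the spheres low-poly
  new THREE.MeshStandardMaterial(),
  COUNT
);
mesh.instanceMatrix.setUsage(THREE.DynamicDrawUsage); // matrices change often
scene.add(mesh);

const dummy = new THREE.Object3D();

// Call this every frame for dynamic objects.
function update(time) {
  for (let i = 0; i < COUNT; i++) {
    dummy.position.set(i % 100, Math.sin(time + i), Math.floor(i / 100));
    dummy.updateMatrix();
    mesh.setMatrixAt(i, dummy.matrix);
  }
  mesh.instanceMatrix.needsUpdate = true; // re-upload to the GPU
}
```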
Does three.js have a function or capability for AI (artificial intelligence)? Specifically, let's say an FPS game: I want enemies to look for me and try to kill me. Is that possible in three.js? Does it have such functionality or a system for it?
WebGL:
- create a buffer
- bind a buffer
- allocate data
- set up state
- issue a draw call
- run GLSL shaders

three.js:
- create a 3d context using WebGL
- create 3-dimensional objects
- create a scene graph
- create primitives like spheres, cubes, toruses
- move objects around, rotate them, scale them
- test for intersections between rays, triangles, planes, spheres, etc.
- create 'materials' (rather than shaders)

JavaScript:
- write algorithms
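To make the contrast concrete, here is the classic minimal three.js program; every WebGL step from the first list is handled internally:

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.z = 3;

// A primitive with a 'material' (rather than a hand-written shader).
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01; // move objects around, rotate them
  renderer.render(scene, camera); // buffers, state, and draw calls happen here
});
```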
I want enemies to look for me and try to kill me
Yes, three.js is capable of doing this; you just have to write an algorithm using three's classes. Your enemies would be 3d objects, casting rays, intersecting with other objects, etc.
You would be building a game engine, and you could use three.js as the rendering framework within that engine; rendering is just one part of it. Think of a 2d shooter: you could make it using a 2d context, but you could also enhance it and make it 2.5d by working with a 3d context. Everything else can stay the same.
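A hypothetical sketch of what such an algorithm could look like; `enemy`, `player`, and `obstacles` are assumed names for your own objects, not a three.js API:

```js
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
const toPlayer = new THREE.Vector3();

// Run once per frame: the enemy "looks for" the player and chases them.
function updateEnemy(enemy, player, obstacles, delta) {
  toPlayer.subVectors(player.position, enemy.position);
  const distance = toPlayer.length();
  toPlayer.normalize();

  // Line-of-sight check: does anything block the ray to the player?
  raycaster.set(enemy.position, toPlayer);
  raycaster.far = distance;
  const blocked = raycaster.intersectObjects(obstacles).length > 0;

  if (!blocked) {
    enemy.lookAt(player.position);                        // face the player
    enemy.position.addScaledVector(toPlayer, 2 * delta);  // chase at 2 units/s
    // ...fire when in range, etc.
  }
}
```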
Any WebGL engine that might have it? Or is it just not a WebGL thing?
Unity probably has everything you can possibly think of. Unity is capable of outputting WebGL, so it could be considered a 'webgl engine'.
Babylon.js is more engine-like.
Three.js is the best and most powerful WebGL 3d engine, with no equal on the market, and it's missing out on such an ability.
Three.js isn't exactly a 3d engine. Wikipedia says:
Three.js is a lightweight cross-browser JavaScript library/API used to create and display animated 3D computer graphics on a Web browser. Three.js uses WebGL.
So if I just need to draw a car, or a spinning logo, I don't need them to come looking for me or try to shoot me. I just need them to stay in one place and rotate.
For a graphics demo you don't even need this: with a few draw instructions, you could render a full-screen quad with a very elaborate pixel shader. Three gives you a ton of options, especially if you consider all the featured examples.
It works both ways: while you can expand three.js any way you want, you can also strip it down for a very specific purpose.
If you need to build an app that does image processing and features no '3d' graphics, you could still leverage WebGL with three.js.
You don't need any vector, matrix, ray, or geometry classes.
If you don't have Vector3, you probably can't keep PlaneGeometry, but you could use BufferGeometry and construct a plane manually. No transformations need to happen, so there's no need for the matrix classes. You'd use shaders and textures, and perhaps something like the EffectComposer.
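A rough sketch of that stripped-down use: a full-screen triangle built from a raw BufferGeometry plus a fragment shader, with no transforms anywhere (the shader just writes a gradient; any image-processing math could replace it):

```js
import * as THREE from 'three';

// One oversized triangle covers the whole screen; no Vector3, no matrices.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(
  new Float32Array([-1, -1, 0,  3, -1, 0,  -1, 3, 0]), 3
));

const material = new THREE.RawShaderMaterial({
  uniforms: { resolution: { value: new THREE.Vector2(innerWidth, innerHeight) } },
  vertexShader: /* glsl */ `
    attribute vec3 position;
    void main() { gl_Position = vec4(position, 1.0); } // no transforms
  `,
  fragmentShader: /* glsl */ `
    precision highp float;
    uniform vec2 resolution;
    void main() {
      vec2 uv = gl_FragCoord.xy / resolution;
      gl_FragColor = vec4(uv, 0.5, 1.0); // replace with your processing
    }
  `
});

const mesh = new THREE.Mesh(geometry, material);
mesh.frustumCulled = false; // it is always on screen anyway

const scene = new THREE.Scene();
scene.add(mesh);
const camera = new THREE.Camera(); // identity camera, never moved

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```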
I'm afraid not. Three.js is just an engine for displaying 3d content.
Using it to create games is only one possibility. A few frameworks do arise with pre-coded stuff like AI (among other things) to attract game creators, but using them is more restrictive than writing the exact code you need.
Three.js itself doesn't, but https://mugen87.github.io/yuka/ is a great AI engine that can work in collaboration with three.js to create AI.
It does line-of-sight and shooting game logic, as well as car logic, which I've been playing around with recently. A React Three Fiber example is here: https://codesandbox.io/s/loving-tdd-u1fs9o
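For reference, the usual Yuka + three.js pairing looks roughly like this (a hedged sketch based on the Yuka docs, assuming an existing `scene`, `camera`, and `renderer`; verify the names against the version you install):

```js
import * as THREE from 'three';
import { EntityManager, Vehicle, SeekBehavior, Vector3, Time } from 'yuka';

const entityManager = new EntityManager();
const time = new Time();

const enemyMesh = new THREE.Mesh(
  new THREE.ConeGeometry(0.2, 0.6),
  new THREE.MeshNormalMaterial()
);
enemyMesh.matrixAutoUpdate = false; // Yuka drives the transform
scene.add(enemyMesh);

const enemy = new Vehicle();
enemy.setRenderComponent(enemyMesh, (entity, renderComponent) => {
  renderComponent.matrix.copy(entity.worldMatrix); // sync three.js with Yuka
});

const playerPosition = new Vector3(5, 0, 5); // e.g. mirrored from your controls
enemy.steering.add(new SeekBehavior(playerPosition)); // "look for me"
entityManager.add(enemy);

function animate() {
  requestAnimationFrame(animate);
  entityManager.update(time.update().getDelta()); // advance the AI
  renderer.render(scene, camera);
}
animate();
```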