Basically, I want to take a model, create a point cloud of the model and have the models move around. I could populate the models and move them around on the CPU, but I want to find a way to handle the animation on the GPU instead.
So I was thinking of using a THREE.Points object and associating a mesh with each point. I know you can associate a sprite with each point in a THREE.Points object, as seen in this example:
https://threejs.org/examples/?q=point#webgl_points_sprites
Is there a way to associate a mesh (namely, an imported model) with each point, so that I can animate the vertices (and thus, the models) on the GPU with a vertex shader?
Yes, there is a way. You should look into THREE.InstancedBufferGeometry.
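A minimal sketch of that approach (names such as `importedMesh`, the instance count, and the shader are placeholders, not the asker's code): copy the imported model's geometry into an InstancedBufferGeometry, give every instance its own offset attribute, and place/animate the copies in a vertex shader.

```javascript
// Geometry from the already-loaded model.
const baseGeometry = importedMesh.geometry;

// Instanced copy: shared index/positions/normals, plus one extra attribute per instance.
const instanced = new THREE.InstancedBufferGeometry();
instanced.index = baseGeometry.index;
instanced.setAttribute('position', baseGeometry.attributes.position);
instanced.setAttribute('normal', baseGeometry.attributes.normal);

const count = 1000;
const offsets = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 1] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100;
}
instanced.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
instanced.instanceCount = count;

const material = new THREE.ShaderMaterial({
  uniforms: { time: { value: 0 } },
  vertexShader: /* glsl */ `
    attribute vec3 offset;
    uniform float time;
    void main() {
      // Every copy of the model is placed and animated here, on the GPU.
      vec3 p = position + offset;
      p.y += sin(time + offset.x) * 2.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});

scene.add(new THREE.Mesh(instanced, material));
// Per frame: material.uniforms.time.value = clock.getElapsedTime();
```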
I compose multiple STLs for 3D printing / milling. For that I also use CSG and need some raytracing for detecting features of the models.
My scene is pretty much static; I just have to move the models around to arrange them. For this use case I'm not really sure which approach to moving / rotating the models is right.
Currently I manipulate the BufferGeometries directly, so everything in the geometry is as it is in the real world: each position, each normal, with no conversion between local and world coordinates.
On the other hand, I could do the same thing by changing the meshes, which means changing just a matrix.
To me, working with the mesh feels more suited to animation and the like, while working with the geometry manipulates the real object, which is my intention.
I'm wondering when one would translate / rotate the geometry and when the mesh. I know that manipulating the geometry is harder on the CPU, which is not a problem for my use case.
Geometry can be translated so that subsequent transformations (such as scale or rotation) originate from a preferred point, i.e. you can move the pivot. Meshes can share a geometry, so transforming the mesh leaves that shared data untouched. There are unique use cases for either approach if you care to memorize the list. Sometimes I follow preexisting code samples; sometimes the decision is made for me by some aspect of the pipeline. Where the two are interchangeable, ask which is more convenient. I like the pattern of composing the transform on a dummy Object3D and then updating the mesh or geometry from its matrix, as in the sketch below. There's a whole book to be written about normals, but sadly I haven't written it...
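A short sketch of both routes, assuming `mesh`, `geometry`, and a pivot point `center` already exist in your scene code:

```javascript
// Move the pivot: translate the geometry so later rotations / scales of the mesh
// happen around a preferred point instead of the geometry's original origin.
geometry.translate(-center.x, -center.y, -center.z);

// Compose a transform on a throwaway Object3D, then reuse its matrix.
const dummy = new THREE.Object3D();
dummy.position.set(10, 0, 0);
dummy.rotation.y = Math.PI / 4;
dummy.updateMatrix();

// Option 1: transform the mesh (cheap, only a matrix changes).
mesh.position.copy(dummy.position);
mesh.quaternion.copy(dummy.quaternion);

// Option 2: bake the transform into the geometry (positions and normals are rewritten).
geometry.applyMatrix4(dummy.matrix);
```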
What’s a good way to have click targets that are larger than the actual scene object?
So far we have been using a larger invisible (yet raycastable) object to do this but it comes at the cost of requiring two draw calls instead of one.
Are there any better solutions?
There is no additional draw call if you set Object3D.visible to false. However, you can still perform raycasting against invisible 3D objects. Use Raycaster.layers to selectively ignore 3D objects when performing intersection tests.
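For reference, a minimal sketch of that setup; the proxy geometry, its size, and the layer number are arbitrary choices, and `camera` / `pointerNdc` (the pointer in normalized device coordinates) are assumed to exist:

```javascript
// An invisible, slightly larger proxy that is raycastable but never drawn.
const hitProxy = new THREE.Mesh(
  new THREE.SphereGeometry(1.5),  // larger than the visible object
  new THREE.MeshBasicMaterial()
);
hitProxy.visible = false;         // skipped by the renderer, so no extra draw call
scene.add(hitProxy);

// Restrict raycasting to a dedicated layer so only proxies are tested.
const raycaster = new THREE.Raycaster();
raycaster.layers.set(1);          // the raycaster only tests layer 1
hitProxy.layers.enable(1);        // the proxy lives on layer 1 as well

raycaster.setFromCamera(pointerNdc, camera);
const hits = raycaster.intersectObject(hitProxy);
```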
So what you are doing is already fine. You might want to consider raycasting only against bounding volumes if raycasting performance becomes a bottleneck in your app. The idea is to create a Box3 (AABB) or Sphere (bounding sphere) for your actual scene object and use only that for raycasting.
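A sketch of the bounding-volume idea, assuming `mesh` is the real scene object and `pointerNdc` / `camera` as above:

```javascript
// Build the bounding volumes once (or whenever the object moves).
const box = new THREE.Box3().setFromObject(mesh);         // world-space AABB
const sphere = new THREE.Sphere();
box.getBoundingSphere(sphere);                            // bounding sphere of that box

// Test the picking ray against the volumes instead of the full geometry.
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(pointerNdc, camera);
const hitBox = raycaster.ray.intersectsBox(box);          // boolean AABB test
const hitSphere = raycaster.ray.intersectsSphere(sphere); // boolean sphere test
```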
I have a model of a human body, and I am able to load it in three.js with the OBJ loader. After loading the model in three.js I need to do some post-processing like:
scaling the length of arm
scaling the length of leg
Is it possible to do that, and how can I do it? I know that an OBJ file stores the information necessary to create meshes (i.e. vertices and faces) and, if required, material information. Can we add any extra information to achieve this?
You want to rig your model. You need to define the skeleton, and Three.js can then use "bones" to scale, position and stretch aspects of your mesh. A simple example of a rigged model in three.js is here:
https://threejs.org/docs/#api/objects/SkinnedMesh
A tutorial on rigging your model in Blender is available here: https://www.youtube.com/watch?v=eEqB-eKcv7k&index=15&list=PLOGomoq5sDLutXOHLlESKG2j9CCnCwVqg
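Once the model is rigged and exported in a format that keeps the skeleton (glTF works well; a plain OBJ cannot store bones), scaling a limb comes down to scaling its bone. A rough sketch; the file name and bone name are made up and depend on your rig:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('rigged-body.glb', (gltf) => {
  // Find the skinned mesh and the bone driving the upper arm.
  const skinned = gltf.scene.getObjectByProperty('isSkinnedMesh', true);
  const upperArm = skinned.skeleton.getBoneByName('upper_arm_L');
  if (upperArm) upperArm.scale.setY(1.3); // lengthen the arm along the bone's axis
  scene.add(gltf.scene);
});
```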
I'm trying to fit the human skeleton completely inside the human body and then rotate both meshes, but I'm not getting the expected result. I need your help.
The human integument 3D model was obtained from MakeHuman; I then bought a separate 3D human skeleton elsewhere to fit inside the integument model. The skeleton model is significantly larger than the integument model, so I used Blender to scale down the skeleton. Within Blender, the skeleton fit nicely inside the integument.
My problems start when I integrate those two models into iOS.
First problem: With both the skeleton and integument models loaded, the skeleton mesh node still appears much bigger than the human integument, although it was already scaled down in Blender. I had to scale it down again using Cocos3D's uniformScale property in order to fit it inside the integument model. Note that both mesh nodes are positioned at exactly the same distance from the camera.
Second problem: As I rotate both mesh nodes, the skeleton mesh node begins to surface and bleed through the integument mesh node. Both have the exact same rotation vector and the same origin.
Help is much needed and appreciated.
Thanks to Bill Hollings, this problem was solved by adding the skeleton as a child node of the integument model.
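The same idea expressed in three.js terms, since that is the library used elsewhere in this thread (the actual fix here was done with Cocos3D's node API): parenting makes both meshes share one transform.

```javascript
// Parent the skeleton to the integument so both share a single transform.
integumentMesh.add(skeletonMesh);   // skeleton now inherits position / rotation / scale
integumentMesh.rotation.y += 0.5;   // rotating the parent rotates both together
```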
I'm trying to make an environment map in the form of a cube that has images mapped onto particular faces, to give the illusion of being in the area (sort of like Google's Street View).
I'm trying to do it in GLGE; however, with my limited experience, I only know how to map one texture to a whole mesh (which is what I'm doing at the moment). If I were to create six different textures, would it be possible for me to specify the faces those textures should be applied to?
You could generate the six faces of the cube as separate objects and use a different texture for each. An alternative is to set different texture coordinates for the different faces of the cube.
If you want ready-to-run code, three.js has a couple of skybox examples. E.g. http://mrdoob.github.com/three.js/examples/webgl_panorama_equirectangular.html
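In three.js terms, the six-separate-textures approach can look like the following sketch (the file names are placeholders):

```javascript
// One material per cube face; BoxGeometry's face groups are ordered +x, -x, +y, -y, +z, -z.
const loader = new THREE.TextureLoader();
const materials = ['right', 'left', 'top', 'bottom', 'front', 'back'].map(
  (name) => new THREE.MeshBasicMaterial({
    map: loader.load(`${name}.jpg`),
    side: THREE.BackSide, // the camera sits inside the cube
  })
);
const skybox = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), materials);
scene.add(skybox);
```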
You should look at "UV Mapping". Check this example. Roughly, UVs describe how the polygons are mapped (in x,y) on the texture.
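If you go the UV route, the coordinates live in the geometry's uv attribute, one (u, v) pair per vertex. A tiny sketch, again in three.js flavour:

```javascript
// Remap some vertices into a sub-region of a texture atlas by editing their UVs.
const uv = geometry.getAttribute('uv'); // BufferAttribute, 2 components per vertex
uv.setXY(0, 0.0, 0.5);                  // vertex 0 -> lower-left of the top half
uv.setXY(1, 0.5, 0.5);
uv.setXY(2, 0.0, 1.0);
uv.needsUpdate = true;                  // re-upload the changed attribute to the GPU
```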
Sounds like you want a cube map texture: it takes six separate images, and you look it up with a direction vector rather than (u, v) coordinates. Cube maps are the usual way to do environments, and they are available in WebGL.
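A cube map in three.js (file names again placeholders) looks like this:

```javascript
const envMap = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg', // +x, -x
  'py.jpg', 'ny.jpg', // +y, -y
  'pz.jpg', 'nz.jpg', // +z, -z
]);
scene.background = envMap;    // draw it as the surroundings
// material.envMap = envMap;  // or sample it from a reflective material
```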