Being new to three.js I've dived into editing lines and vertices to see what's possible. Right now I'm creating my own empty geometry (vertObj), and adding a few vertices to it initially.
When it's rendered, I can see the vertices.
I would like to keep adding vertices over time (via vertObj.vertices.push), and to make that modification possible in the render loop I've set
vertObj.dynamic = true;
vertObj.verticesNeedUpdate = true;
However, this doesn't change anything: I don't see the newly added points being rendered.
Any ideas?
Related
In the program I am writing, I have a large asteroid field (implemented using a PointCloud). The problem I sometimes run into is that when the camera moves, asteroids disappear as soon as they touch the edge of the screen. If the camera moves gradually, they get closer and closer to the edge and then suddenly pop! They're gone, even though part of them should still be in view. The problem isn't as obvious when the camera is moving quickly, but you can still spot it if you look closely. How do I fix that?
Here is a link to a JS fiddle with the code I'm using to create the asteroid field (you won't be able to test it, but you can look at it):
https://jsfiddle.net/yazwz464/
As gaitat said in his comment, the points are most likely being culled by the camera frustum because your points appear larger than their underlying geometry.
Try setting Object3D.frustumCulled = false for those objects.
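Why the points pop at the edge can be sketched in plain JavaScript (no three.js, all names here are illustrative): culling tests the point's center against the frustum planes, while the sprite's drawn radius is ignored.

```javascript
// Sketch (plain JS, no three.js): why a large point sprite can pop out of
// view at the screen edge. Frustum culling tests the point's *center*
// against the clipping planes; the sprite's drawn radius is ignored.

// A clipping plane: inward-pointing normal (nx, ny, nz) plus offset d.
// A position p is inside the plane when dot(normal, p) + d >= 0.
function signedDistance(plane, p) {
  return plane.nx * p.x + plane.ny * p.y + plane.nz * p.z + plane.d;
}

// Left clipping plane of a toy frustum, facing inward along +x.
const leftPlane = { nx: 1, ny: 0, nz: 0, d: 0 };

const spriteRadius = 0.5;                  // how large the point is drawn
const center = { x: -0.1, y: 0, z: -5 };   // center just past the plane

const centerInside = signedDistance(leftPlane, center) >= 0;
const spriteOverlapsView = signedDistance(leftPlane, center) + spriteRadius >= 0;

console.log(centerInside, spriteOverlapsView); // center is culled even though the sprite still overlaps the view
```

Setting frustumCulled = false simply skips this center test, so the sprite keeps being drawn.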
I am updating/modifying the locations of points/vertices and ran into this issue as well. In my case, I needed to both update the vertices AND recompute the bounding sphere.
geometry.verticesNeedUpdate = true;
geometry.computeBoundingSphere();
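The reason the second call matters can be sketched in plain JavaScript (illustrative code, not the three.js source): culling uses a cached bounding sphere, and roughly as computeBoundingSphere does, the sphere is rebuilt from the bounding-box center and the farthest vertex.

```javascript
// Sketch (plain JS): why updating vertices alone is not enough. The renderer
// culls with a *cached* bounding sphere; a vertex moved or added outside the
// stale sphere makes the cull test wrong until the sphere is recomputed.

function computeBoundingSphere(points) {
  // Center = midpoint of the axis-aligned bounding box, radius = farthest point.
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const p of points) {
    min.x = Math.min(min.x, p.x); max.x = Math.max(max.x, p.x);
    min.y = Math.min(min.y, p.y); max.y = Math.max(max.y, p.y);
    min.z = Math.min(min.z, p.z); max.z = Math.max(max.z, p.z);
  }
  const center = { x: (min.x + max.x) / 2, y: (min.y + max.y) / 2, z: (min.z + max.z) / 2 };
  let radiusSq = 0;
  for (const p of points) {
    const dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
    radiusSq = Math.max(radiusSq, dx * dx + dy * dy + dz * dz);
  }
  return { center, radius: Math.sqrt(radiusSq) };
}

function contains(sphere, p) {
  const dx = p.x - sphere.center.x, dy = p.y - sphere.center.y, dz = p.z - sphere.center.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz) <= sphere.radius;
}

const points = [{ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }];
const staleSphere = computeBoundingSphere(points);

points.push({ x: 10, y: 0, z: 0 }); // vertex added after the sphere was cached

const freshSphere = computeBoundingSphere(points);
console.log(contains(staleSphere, points[2]), contains(freshSphere, points[2])); // false true
```

With the stale sphere, the new vertex is "outside" the object as far as culling is concerned, which is exactly the disappearing-geometry symptom.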
So I'd like to create some lines that can be modified by dragging the points that connect them.
An example of the initial state:
The first one has been moved down, the second one up, and the third one right and down.
On the implementation side I currently have two meshes. The first one is stretched so that it covers the distance from its starting point to the next point, and the second one marks the starting point.
var meshLine = new THREE.Mesh(boxGeometry, material);
meshLine.position.set(x, y, z);
meshLine.scale.set(1, 1, distancetonextpoint); // scale is a Vector3, so use .set()

var meshPoint = new THREE.Mesh(sphereGeometry, material);
meshPoint.position.set(x, y, z);
meshPoint.scale.set(2, 2, 2);
What I want is that when the user drags the circular point, the other lines stretch or change their position according to the one being dragged.
Is there a more reasonable solution for this? I feel mine is not very good or clean, and I'd have to do quite a bit of heavy lifting to get the movement working.
I've also looked at this example which looks visually very nice but could not integrate it to my system.
You mean you need to edit the object's geometry by dragging its vertices (here, a line).
Vertices can't be dragged by themselves, so you need to loop through the geometry and create little spheres at each vertex position;
You set up a raycaster to pick those spheres, as in the examples;
Your screen is 2D, so to drag objects in 3D you need a surface that passes through the sphere's position. For this you set an invisible plane at the vertex position and make it look at the camera;
Once you can correctly drag the spheres, you make the corresponding vertices on the object (your lines) keep the same positions as their spheres;
End with geometry.verticesNeedUpdate = true.
And you have your new geometry.
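The invisible camera-facing plane mentioned above boils down to a ray-plane intersection. Here is a plain-JS sketch of that math (illustrative names, not a three.js API; Raycaster.intersectObject does this for you in practice):

```javascript
// Sketch (plain JS): dragging on an invisible, camera-facing plane. The
// picking ray from the mouse is intersected with a plane that passes through
// the grabbed vertex; the hit point is the vertex's new position.

function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Plane through `point` with normal `n`; ray with origin `o`, direction `d`.
function intersectPlane(point, n, o, d) {
  const denom = dot(n, d);
  if (Math.abs(denom) < 1e-8) return null; // ray parallel to the plane
  const t = dot(n, { x: point.x - o.x, y: point.y - o.y, z: point.z - o.z }) / denom;
  if (t < 0) return null;                  // plane is behind the ray
  return { x: o.x + t * d.x, y: o.y + t * d.y, z: o.z + t * d.z };
}

// Camera at z = 10 looking down -z, so the drag plane faces the camera (+z).
const grabbedVertex = { x: 1, y: 1, z: 0 };
const planeNormal = { x: 0, y: 0, z: 1 };
const rayOrigin = { x: 0, y: 0, z: 10 };
const rayDirection = { x: 0.2, y: 0.1, z: -1 }; // picking ray from the mouse

const hit = intersectPlane(grabbedVertex, planeNormal, rayOrigin, rayDirection);
console.log(hit); // the dragged vertex moves here
```

Because the plane looks at the camera, every mouse position maps to exactly one 3D point at the vertex's depth, which is what makes the drag feel natural.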
For code details on picking objects, look at the official picking example, draggable cubes.
This example shows how to use it for editing objects.
Comment if you need more explanation.
This question already has answers here:
Show children of invisible parents
(2 answers)
Closed 7 years ago.
I'd like to have a collection of objects that appear to be floating free in space but are actually connected to each other so that they move and rotate as one. Can I put them inside a larger mesh that is itself completely invisible, that I can apply transformations to? I tried setting transparency: true on the MeshNormalMaterial constructor, but that didn't seem to have any effect.
As a simple representative example: say I want to render just one pair of opposite corner cubies in a Rubik's Cube, but leave the rest of the Cube invisible. I can rotate the entire cube and watch the effect on the smaller cubes as they move together, or I can rotate them in place and break the illusion that they're part of a larger object.
In this case, I imagine I would create three meshes using BoxGeometry or CubeGeometry, one with a side length triple that of the other two. I would add the two smaller meshes to the larger one, and add the larger one to the scene. But when I try that, I get one big cube and can't see the smaller ones inside it. If I set visible to false on the larger mesh, the smaller meshes disappear along with it, even if I explicitly set visible to true on them.
Group them inside an Object3D.
var parent = new THREE.Object3D(); // note the capital D
parent.add( child1 ); // etc.
parent.rotation.x = 2; // rotates the group
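What grouping buys you can be sketched in plain JavaScript (illustrative code, a 2D-style rotation for brevity): each child's world position is its local position transformed by the parent, so one rotation on the parent moves every child as a rigid whole.

```javascript
// Sketch (plain JS): why children of an Object3D move as one. A child's
// world position is its local position transformed by the parent, so a
// single rotation applied to the parent carries every child with it.

function rotateZ(p, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  return { x: p.x * c - p.y * s, y: p.x * s + p.y * c, z: p.z };
}

function worldPosition(parent, localPos) {
  const r = rotateZ(localPos, parent.rotationZ);
  return { x: r.x + parent.position.x, y: r.y + parent.position.y, z: r.z + parent.position.z };
}

const parent = { position: { x: 0, y: 0, z: 0 }, rotationZ: 0 };
const cornerA = { x: 1, y: 0, z: 0 };   // two "corner cubies" on opposite sides
const cornerB = { x: -1, y: 0, z: 0 };

parent.rotationZ = Math.PI / 2; // rotate the whole group 90 degrees

console.log(worldPosition(parent, cornerA)); // ~ { x: 0, y: 1, z: 0 }
console.log(worldPosition(parent, cornerB)); // ~ { x: 0, y: -1, z: 0 }
```

An Object3D is never rendered itself, so nothing needs to be made invisible: you get the shared transform without a visible container mesh.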
The following illustrates the rendering order I would like to obtain for two plane geometries:
http://jsfiddle.net/Axy2F/8/
This works fine under r58 but under r61 the red square is obscured regardless of how I structure the scene graph. I'm unclear whether this is a bug in r61, or whether I was doing things incorrectly in r58, in a way that just happened to work.
Am I right in assuming that behind.add(child) should suffice to have the red square "beneath" the indigo one in the scene graph, and therefore rendered on top of it?
If not, what is the correct way to establish the rendering order by controlling the construction of the scene graph (that works with r61)? I would like to avoid setting renderDepth explicitly. Note that setting renderer.sortObjects to false does not help.
The object that is in front is the object that is closest to the camera. Being a child has nothing to do with it.
Both your objects have position ( 0, 0, 0 ), so they are the same distance from the camera.
This will lead to z-fighting, which is worse with CanvasRenderer than it is with WebGLRenderer.
Change the position of the child to render it in front. For example,
child.position.z = 1;
FYI, r.61 has a different tie-breaker rule than r.58 did. This is why the rendering is different in r.61.
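The sorting the answer describes can be sketched in plain JavaScript (illustrative, not the three.js internals): objects are ordered by distance to the camera, and equal distances tie, which is where z-fighting and version-dependent tie-breaking come from.

```javascript
// Sketch (plain JS): what is "in front" is decided by distance to the
// camera, not by scene-graph parentage. Objects at the same depth tie.

function distanceSq(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

const camera = { x: 0, y: 0, z: 10 };
const indigo = { name: 'indigo', position: { x: 0, y: 0, z: 0 } };
const red = { name: 'red', position: { x: 0, y: 0, z: 1 } }; // nudged toward the camera

// Painter's order: farthest drawn first, nearest drawn last (on top).
const drawOrder = [indigo, red].sort(
  (a, b) => distanceSq(camera, b.position) - distanceSq(camera, a.position)
);

console.log(drawOrder.map(o => o.name)); // [ 'indigo', 'red' ] -> red renders on top
```

With both squares at z = 0 the comparator returns 0 and the result depends entirely on the tie-breaker, which is exactly what changed between r58 and r61.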
My problem seems really simple, but I just can't work out the reason behind it:
I have a vertex buffer and an index buffer that are filled with glBufferSubData. A couple of meshes are filled one by one into this big VBO and its corresponding IBO.
Then I try to render those small meshes one by one with glDrawElements.
The problem is that only the first mesh gets rendered, multiple times, in the places where each of the different meshes should be!
Following info may be useful:
I create VBO this way
gl.glGenBuffers(1, buffers_, 0);
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, buffers_[0]);
gl.glBufferData(GL11.GL_ARRAY_BUFFER, sizeInBytes, null, GL11.GL_DYNAMIC_DRAW);
Then each mesh is filled into the VBO like this
gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, name);
gl.glBufferSubData(GL11.GL_ARRAY_BUFFER, startOffsetInBytes, numBytesToCopy, nioBuffer);
And meshes are rendered in this fashion:
bind VBO/IBO and set appropriate client states
then set vertex, normal, and texcoord "pointers" - they point at the beginning of VBO plus their offsets in vertex "structure"
and call gl.glDrawElements(GL10.GL_TRIANGLES, indicesNum, GL10.GL_UNSIGNED_SHORT, startIndexOffsetInBytes);
then finally, unbind VBO/IBO and disable client states
I debugged the code and I'm sure that sizeInBytes, startOffsetInBytes, numBytesToCopy and startIndexOffsetInBytes are correct values (in bytes), and that indicesNum is the number of indices/vertices in the mesh to render.
One suspicious place is setting vertex/normal/texcoord pointers - they get set only once - and set to the beginning of the VBO. Maybe I need to set them each time before calling glDrawElements?
Found the reason! Everything I described is indeed correct. The problem was in the way the meshes were being added into the VBO/IBO: the indices for each new mesh restarted from 0, so only the first mesh in the VBO was getting rendered.
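The fix can be sketched in plain JavaScript (illustrative names; in GL terms, each index must be offset by the number of vertices already packed before that mesh):

```javascript
// Sketch (plain JS): the bug from the question and its fix. When several
// meshes share one VBO/IBO, each mesh's indices must be rebased by the
// number of vertices already packed; otherwise every draw call reads the
// first mesh's vertices.

function packMeshes(meshes) {
  const vertices = [];
  const indices = [];
  const draws = []; // one glDrawElements call per mesh
  for (const mesh of meshes) {
    const baseVertex = vertices.length; // vertices already in the VBO
    const firstIndex = indices.length;
    for (const i of mesh.indices) {
      indices.push(i + baseVertex);     // THE FIX: rebase, don't restart at 0
    }
    vertices.push(...mesh.vertices);
    draws.push({ firstIndex, count: mesh.indices.length });
  }
  return { vertices, indices, draws };
}

const meshA = { vertices: ['a0', 'a1', 'a2'], indices: [0, 1, 2] };
const meshB = { vertices: ['b0', 'b1', 'b2'], indices: [0, 1, 2] };

const packed = packMeshes([meshA, meshB]);
console.log(packed.indices); // [ 0, 1, 2, 3, 4, 5 ]
```

Without the rebase both draw calls would contain indices 0..2, so both would read meshA's vertices, which is exactly the "first mesh rendered everywhere" symptom.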