Creating lines that can be modified from ends - three.js

So I'd like to create some lines that can be modified by dragging the points that connect them.
An example of the initial state:
After editing: the first line has been moved down, the second up, and the third right and down.
On the implementation side I currently have two meshes: the first is stretched out so that it covers the distance from its starting point to the next point, and the second marks the starting point.
// Box mesh stretched along z to span the distance to the next point
var meshLine = new THREE.Mesh(boxGeometry, material);
meshLine.position.set(x, y, z);
meshLine.scale.set(1, 1, distanceToNextPoint); // scale is a Vector3, so use .set()

// Sphere mesh marking the starting point
var meshPoint = new THREE.Mesh(sphereGeometry, material);
meshPoint.position.set(x, y, z);
meshPoint.scale.set(2, 2, 2);
What I want is that when the user drags a circular point, the connected lines stretch or reposition to follow the point being dragged.
Is there a more reasonable solution for this? Mine does not feel good and clean, and I would have to do quite a bit of heavy lifting to get the movement done.
I've also looked at this example, which looks visually very nice, but I could not integrate it into my system.

You mean you need to edit the object's geometry by dragging its vertices (here a line).
Vertices can't be dragged by themselves, so you need to loop through the geometry and create little spheres at each vertex position;
You set a raycaster to pick those spheres, as in the examples;
Your screen is 2D, so to drag objects in 3D you need a surface perpendicular to the screen that intersects the sphere's position. For this you set an invisible plane at the vertex position and make it look at the camera;
Once you can correctly drag the spheres, you tell the corresponding vertices on the object (your lines) that they must keep the same position as their spheres;
End with geometry.verticesNeedUpdate = true.
And you have your new geometry. A sketch of that last update step follows.
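Here is a minimal sketch of the final step, assuming the legacy THREE.Geometry API (which is what the verticesNeedUpdate flag above belongs to); the userData bookkeeping is an illustrative convention, not the only way:

// Called whenever a helper sphere has been dragged to a new position.
// Assumes each sphere stored its vertex index in userData when it was created.
function onSphereDragged(sphere, lineMesh) {
    var index = sphere.userData.vertexIndex; // hypothetical bookkeeping field
    lineMesh.geometry.vertices[index].copy(sphere.position); // vertex follows its sphere
    lineMesh.geometry.verticesNeedUpdate = true; // flag the geometry for re-upload
}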
For code details on picking objects, look at the official picking example, draggable cubes.
This example shows how to use it for editing objects.
Comment if you need more explanations.

Related

Dragging 3D objects on walls using mouse in three.js

I am trying to achieve movement of an object on walls, instead of only one plane. In this example, an object is dragged on walls using:
intersects = raycaster.intersectObjects([walls]);
object.position.copy(intersects[0].point);
However, with this method the object jumps, because the object's center snaps to the mouse. There is a related question with a helpful JSFiddle for dragging on one plane without jumping. Would you please help me modify it for multiple planes (walls)? Thanks.
After reading your comment, I think what you're looking for is for the object to animate to the position. You can do this in a few ways. The "three.js" way is to move it each frame (within the animate/update loop). You could do this with Vector3.lerp by storing intersects[0].point as your target location and lerping your object.position to it each frame. Another option is to use an animation library like animejs or gsap. A sketch of the lerp approach follows.
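Here is a rough sketch of the lerp version; the targetPosition variable and the 0.1 easing factor are illustrative choices, not part of the original code:

var targetPosition = new THREE.Vector3();

function onMouseMove(event) {
    // ...update the raycaster from the mouse as before...
    var intersects = raycaster.intersectObjects([walls]);
    if (intersects.length > 0) {
        targetPosition.copy(intersects[0].point); // remember where to go
    }
}

function animate() {
    requestAnimationFrame(animate);
    // Move 10% of the remaining distance each frame for smooth easing.
    object.position.lerp(targetPosition, 0.1);
    renderer.render(scene, camera);
}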

Find which Object3Ds the camera can see in Three.js - raycast from the camera to each object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sat on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated by my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. I can then find which rays intersect the model and remove the corresponding points from a list of all the points, leaving me with a list of the points the camera can see.
So far so good. The ray creation and intersection code has to sit in the render loop (it must re-run whenever the camera moves), and as a result it is horrendously slow (obviously).
// Start with every grid point marked visible, then remove occluded ones.
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
var direction = new THREE.Vector3();

// Create a ray from the camera position toward each point in the grid.
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var point = gridPoints.geometry.vertices[i];
    direction.subVectors(point, startPoint).normalize();
    var ray = new THREE.Raycaster(startPoint, direction);
    // If the ray hits the mesh, the point is occluded; remove it from the list.
    if (ray.intersectObject(defaultMesh).length > 0) {
        gridPointsVisible.splice(gridPointsVisible.indexOf(point), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have two questions:
Is there a better of way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
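A rough sketch of the render-and-read-back step, assuming a pickingScene in which each sphere is drawn with a flat color derived from its index, and a render target named pickingTarget (all of these names are illustrative):

renderer.setRenderTarget(pickingTarget); // a THREE.WebGLRenderTarget(width, height)
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null); // resume drawing to the canvas

// Read the off-screen pixels back as RGBA bytes.
var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, width, height, pixels);

// Collect every unique color id that survived to the image.
var visibleIds = {};
for (var i = 0; i < pixels.length; i += 4) {
    var id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
    visibleIds[id] = true;
}
// A point whose computed color id appears in visibleIds is visible to the camera.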

Looking for some pointers for voxelisation strategies

To voxelise a mesh basically means being able to determine, for a point (x, y, z), whether it is inside or outside the mesh.
A mesh here is just a raw set of triangles. Being outside the mesh means there is a ray from the point (in some direction) that does not intersect the mesh.
For a well-behaved, closed, non-self-intersecting mesh that is easy: trace a ray in any direction; if the number of intersections is odd, the point is inside.
But for a "bad" mesh composed of open parts this is terrible. For example, the mesh could be two cubes connected by an open cylinder that sticks into both of them.
Z-buffer rendering solves this fine for one viewpoint, but the issue is handling any viewpoint. To me the problem is well defined, but not obvious to solve.
I am looking for some pointers on how to approach this problem. There must be tons of research for it. Any links to papers? Or am I missing something in how to think about the problem?
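For reference, the simple parity test for a closed mesh could look like this in three.js, assuming the mesh uses a THREE.DoubleSide material so the raycaster reports every crossing:

function isInsideClosedMesh(point, mesh) {
    // Any fixed direction works for a closed, well-behaved mesh.
    var raycaster = new THREE.Raycaster(point, new THREE.Vector3(0, 0, 1));
    return raycaster.intersectObject(mesh).length % 2 === 1; // odd = inside
}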
It's possible if all of the surfaces on your meshes have a "sidedness" to them, i.e. they have a front side and a back side.
Then to determine if a point is inside the mesh you can trace a ray from the point in any direction, and keep a count of intersections like this:
if the ray goes through the back side and out the front side (i.e. emerging from the inside to the outside), then add one.
if the ray goes through the front side and out the back side, (i.e. entering the inside from the outside), then subtract one.
if the mesh surface is double sided, do not add or subtract anything.
If the final count is positive, or if the point lies exactly on any surface, then the point is inside the mesh.
You need to add the restriction that it's never possible to see an exposed back side of a surface from outside the mesh. This is equivalent to saying that the mesh should always render correctly from all exterior viewpoints with back-face culling turned on.
For the example with the cube and open cylinder, for the cube to be solid, its surfaces should be single sided (making them double-sided would mean defining a hollow cube with infinitely thin walls).
The mesh surface of the cylinder would also have to be single-sided, or else it also would have infinitely thin walls, and points inside the cylinder (that aren't inside the cubes) would not be inside the mesh. As long as the ends of the cylinder are stuck inside the cube, the restriction is not broken since you can never see an exposed back side of a face.
If one of the cubes is removed, then the restriction is not met and this algorithm will fail.
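A hedged sketch of this signed count in three.js, assuming an untransformed mesh with outward-facing normals and a THREE.DoubleSide material so that back-face hits are reported (genuinely double-sided surfaces would be skipped, per the rule above):

function isPointInside(point, mesh) {
    var direction = new THREE.Vector3(1, 0, 0); // any direction works
    var raycaster = new THREE.Raycaster(point, direction);
    var count = 0;
    raycaster.intersectObject(mesh).forEach(function (hit) {
        // dot > 0: the ray travels with the outward normal, so it is emerging
        // from inside to outside -> add one.
        // dot < 0: the ray is entering the inside from the outside -> subtract one.
        count += direction.dot(hit.face.normal) > 0 ? 1 : -1;
    });
    return count > 0; // points lying exactly on a surface need separate handling
}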

Three.js: how to cast a ray (or rays) from the camera to every object in the renderer to check faceIndex

I have a project for children, http://kinosura.kiev.ua/sova/, and I need to check the faceIndex of all the cubes on screen.
Right now I use the intersections array from the mouse, but it only works when the user points at a cube.
How can I cast a ray (or rays) from the camera to every object to check faceIndex?
I tried to make four rays to the cubes, but if I set cube.position as the origin, like this:
raycaster.setFromCamera( cube1.position, camera )
I get an empty array of intersections.
I also tried to set a static 2D vector as the origin (taking the coordinates from the mouse), but my renderer size is relative and those coordinates change all the time, so it doesn't work.
Thanks for any answer.
I suggest that you try another approach. It appears that your cubes do not cover one another relative to the camera view, so use the surface normals: compare each one to the view direction to determine whether the face points toward or away from the camera, with a simple one-per-polygon dot product.
When you are creating your geometry, before adding it to a THREE.Mesh, call .computeFaceNormals() on it.
Instead of raycasting, iterate through all the faces, grab the surface normal of each face, transform it relative to the view (the inverse transpose of the object's matrix), then take the dot product. It might sound complicated at first, but it's actually just a couple of steps, and much faster than doing a lot of raycasts (which would include this work anyway).
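A minimal sketch of that test, assuming the legacy THREE.Geometry API (with its faces array) and a mesh variable called cube; both names are illustrative:

// Transforms face normals into world space (the inverse transpose of the matrix).
var normalMatrix = new THREE.Matrix3().getNormalMatrix(cube.matrixWorld);
var worldNormal = new THREE.Vector3();
var viewDirection = new THREE.Vector3();

cube.geometry.faces.forEach(function (face, faceIndex) {
    worldNormal.copy(face.normal).applyMatrix3(normalMatrix).normalize();
    // Approximate the view direction from the cube's position toward the camera.
    viewDirection.subVectors(camera.position, cube.position).normalize();
    if (worldNormal.dot(viewDirection) > 0) {
        console.log('face', faceIndex, 'is turned toward the camera');
    }
});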

How to create invisible mesh? [duplicate]

I'd like to have a collection of objects that appear to be floating free in space but are actually connected to each other so that they move and rotate as one. Can I put them inside a larger mesh that is itself completely invisible, that I can apply transformations to? I tried setting transparency: true on the MeshNormalMaterial constructor, but that didn't seem to have any effect.
As a simple representative example: say I want to render just one pair of opposite corner cubies in a Rubik's Cube, but leave the rest of the Cube invisible. I can rotate the entire cube and watch the effect on the smaller cubes as they move together, or I can rotate them in place and break the illusion that they're part of a larger object.
In this case, I imagine I would create three meshes using BoxGeometry or CubeGeometry, one with a side length triple that of the other two. I would add the two smaller meshes to the larger one, and add the larger one to the scene. But when I try that, I get one big cube and can't see the smaller ones inside it. If I set visible to false on the larger mesh, the smaller meshes disappear along with it, even if I explicitly set visible to true on them.
Group them inside an Object3D.
var parent = new THREE.Object3D(); // note the capital D; renders nothing itself
parent.add( child1 ); // etc.
scene.add( parent );
parent.rotation.x = 2; // rotating the parent rotates the whole group
