Parent mesh scaling affecting child mesh - three.js

I'm playing around with my Three.js project, which is supposed to let users adjust a self-designed table.
The issue is that adjusting the height of the table surface also affects the height of the legs. I want the height of the legs to remain the same when the height of the surface (the parent) is adjusted.
I'm using the dat.GUI library for user-input adjustments:
tableBoard.scale.y = this.title.TableDepth; //This affects the child
The surface is the parent and the legs are children of the surface.
Can someone point me in the direction of how I can adjust the height of my surface without affecting the child meshes?
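A minimal sketch of one common fix, assuming a legs array holding the leg meshes (the array name and scene are illustrative, not from the question): make the legs siblings of the board under a shared THREE.Group instead of children of the board, so scaling the board no longer cascades to them.

// Sketch: restructure the hierarchy so the legs no longer inherit the
// tabletop's scale. "legs" is an assumed array of the leg meshes.
const table = new THREE.Group();
table.add(tableBoard);                 // the surface, free to scale
legs.forEach((leg) => table.add(leg)); // siblings, not children
scene.add(table);                      // scene is your THREE.Scene

tableBoard.scale.y = this.title.TableDepth; // now only affects the board

Alternatively, keep the parent/child hierarchy and counter-scale each child with leg.scale.y = 1 / tableBoard.scale.y, though regrouping is usually cleaner.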

Related

MapLibre GL JS with terrain layer: How to pin a horizontal plane to a specific altitude?

I based some code on the "Add a 3D model" example at maplibre.org in order to draw just a horizontal plane on a map that uses setTerrain to add a terrain layer.
My intention is to draw a couple of semitransparent layers at given heights above sea level and have them intersect with the mountains, somewhat like contour lines.
In my first test I just created a 1 km-wide square at altitude 0.
I am somewhat confused by the behavior, since altitude 0 turns out to be the height of the terrain at the center of the visible map area. When I then drag the map and release the mouse, altitude 0 somehow gets reset to the new terrain height at the center, making the plane change its relative altitude.
An animated GIF illustrates the problem: it shows a mountain range to the right, with the elevation greatly exaggerated to make the issue easier to see.
What do I need to do to be able to specify the height of the plane in meters above sea level and have it appear fixed at that height while dragging the map?
I think I need to get the height of the terrain at the center and add or subtract it from the plane's z-position so that the plane stays put relative to the landmarks, but I have no idea how to do this inside a custom layer's render function.
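That reasoning matches the observed behavior: the layer's frame rides on the terrain height under the map center, so the plane needs a compensating offset. A simplified sketch, assuming a planeMesh for the plane and relying on MapLibre's queryTerrainElevation (available when terrain is enabled); converting meters into the custom layer's local units is omitted, since it depends on how the layer builds its matrix.

// Re-anchor the plane whenever the map moves, so a fixed altitude
// above sea level stays put. targetAltitude is an example value.
const targetAltitude = 2000; // meters above sea level

map.on('move', () => {
  // terrain height (meters) under the current map center
  const centerElevation = map.queryTerrainElevation(map.getCenter()) || 0;
  // compensate for the frame being anchored to the terrain at the center
  planeMesh.position.z = targetAltitude - centerElevation;
  map.triggerRepaint();
});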

Change length, breadth and height of geometry on run in threejs

I am working on a configurator in Three.js, for which I have a scene into which I load a glTF model. It has various elements in it, like a real scene: for example, an office scene where we have a table, and on top of the table a notepad and pens. Apart from the table there are some chairs and other objects as well.
Now I want to give the user a feature to change the length, breadth, and height of any table or other object.
For this I have tried changing the scaling of the material:
textureMaterials.scale.x = 1.5;
textureMaterials.scale.z = 1.5;
But the issue is that changing the scale also changes the object's position; and even if I set the position so it shows in the right place, the stuff placed on top of that table remains at its original position.
So there are a lot of related things to keep consistent.
Please let me know what a good way to do this is.
Thanks
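A sketch of one common approach, with illustrative names (tableMesh, notepad, and floorY are assumptions, not from the question): move the geometry's local origin to the base of the table so that vertical scaling grows it upward instead of about its center, then re-seat whatever rests on top using the new height.

// Put the table's local origin at its base so scale.y grows it upward.
tableMesh.geometry.computeBoundingBox();
const bb = tableMesh.geometry.boundingBox;
const height = bb.max.y - bb.min.y;
tableMesh.geometry.translate(0, -bb.min.y, 0); // origin now at the base
tableMesh.position.y = floorY;                 // base stays on the floor

// Resize: the base no longer moves, only the top does.
tableMesh.scale.y = 1.5;

// Re-seat objects resting on the table at the new top height.
notepad.position.y = floorY + height * tableMesh.scale.y;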

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given moment, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model, remove the points corresponding to those rays from the list of all points, and be left with a list of the points the camera can see.
So far so good in theory, but the ray creation and intersection code has to live in the render loop (it must re-run whenever the camera moves), and it's therefore horrendously slow (obviously).
// Copy the vertex list; occluded points get removed from this copy.
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
var direction = new THREE.Vector3();
// Cast a ray from the camera towards each point in the grid.
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var point = gridPoints.geometry.vertices[i];
    direction.subVectors(point, startPoint).normalize();
    var ray = new THREE.Raycaster(startPoint, direction);
    // If the ray hits the mesh, treat the point as occluded.
    if (ray.intersectObject(defaultMesh).length > 0) {
        var idx = gridPointsVisible.indexOf(point);
        if (idx !== -1) gridPointsVisible.splice(idx, 1); // remove it
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
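A minimal sketch of the read-back side of this idea, assuming a separate pickingScene in which each point is drawn with a flat color that encodes its index (index + 1 packed into RGB, with 0 reserved for everything else); the shader side is omitted.

const pickW = 256, pickH = 256; // low-res target, as suggested above
const pickTarget = new THREE.WebGLRenderTarget(pickW, pickH);
const buffer = new Uint8Array(pickW * pickH * 4);

function findVisiblePoints(renderer, pickingScene, camera, pointCount) {
  // Render the flat-color scene off-screen instead of to the canvas.
  renderer.setRenderTarget(pickTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);

  // Read back the RGBA pixels and collect every color id that appears.
  renderer.readRenderTargetPixels(pickTarget, 0, 0, pickW, pickH, buffer);
  const seen = new Set();
  for (let i = 0; i < buffer.length; i += 4) {
    seen.add((buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2]);
  }

  // Ids 1..pointCount map back to points; anything else is background.
  const visible = [];
  for (let id = 1; id <= pointCount; id++) {
    if (seen.has(id)) visible.push(id - 1); // back to the point's index
  }
  return visible;
}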

Update plane texture offset from movement on a sphere

I'm working on a driving simulation in Three.js using height map data from the planet Venus.
GitHub repo here: https://github.com/hypothete/venus-walk
Here's how the simulation works so far:
In a hidden scene, a camera called the globeCamera moves at a fixed height over a sphere textured with the Venus height map. You can see this happening in the lower left viewport in my picture. The globeCamera renders its view to a WebGLRenderTarget to be used as a local height map. The result is in the second viewport in the middle left.
In the visible scene, a plane mesh called the terrainMesh has its vertices displaced up and down in correspondence with the values from the local height map. This gives the illusion that a vehicle placed in the center of the plane is moving across a surface when actually we're just updating the plane's vertices from the movement of the globeCamera.
Since I know the rotation of the globeCamera, I can pass that value to my fragment shader to rotate the terrainMesh's rock texture with the height map.
How can I offset the rock texture's position so that texture units translate with the terrain as well? I've tried tracking the globeCamera's offset as a 2D vector and adding that to the rotated UV in the fragment shader, but my results were inconsistent. Thanks for your help.
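One source of inconsistency worth checking: if the offset is added after the rotation, it has to be expressed in the rotated frame. A hedged sketch of a fragment shader that composes rotation and translation in one consistent order, assuming uniforms uRotation and uOffset are updated from the globeCamera each frame (the names are illustrative, not from the repo):

const fragmentShader = `
  uniform sampler2D rockTexture;
  uniform float uRotation; // globeCamera heading, in radians
  uniform vec2  uOffset;   // accumulated globeCamera movement, in UV units
  varying vec2  vUv;

  void main() {
    float c = cos(uRotation), s = sin(uRotation);
    mat2 rot = mat2(c, s, -s, c); // counterclockwise rotation by uRotation
    // Rotate about the texture center and translate inside the same
    // frame, so the rock texture tracks the rotated height map.
    vec2 uv = rot * (vUv - 0.5 + uOffset) + 0.5;
    gl_FragColor = texture2D(rockTexture, uv);
  }
`;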

How to tell what part of a THREE.Geometry object (vertices and faces) are viewable in the camera frustum?

I'm trying to dynamically add and remove children (text labels) in a scene depending on which parts of a geometry object I can see. There are too many vertices to label (4 million) to add them all to the scene at once, so I only want to add the ones currently visible on screen. Can somebody recommend an approach to achieve this?
NB. I have a large 2D plane with points across the x and y axes which I can pan and zoom in and out of. I'm trying to add 2D labels (canvas-to-texture) to those points once I've reached a certain zoom value. A glorified WebGL scatterplot, basically!
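A minimal sketch of the frustum test itself, using THREE.Frustum built from the camera matrices (older three.js versions call setFromProjectionMatrix setFromMatrix). Testing 4 million points this way every frame will still need batching or a spatial index on top; call it after the camera has been updated so matrixWorldInverse is current.

const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

// Returns the subset of points (THREE.Vector3 world positions)
// that lie inside the camera's view frustum.
function pointsInView(camera, points) {
  camera.updateMatrixWorld();
  projScreenMatrix.multiplyMatrices(
    camera.projectionMatrix,
    camera.matrixWorldInverse
  );
  frustum.setFromProjectionMatrix(projScreenMatrix);
  return points.filter((p) => frustum.containsPoint(p));
}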
