MapLibre GL JS with terrain layer: How to pin a horizontal plane to a specific altitude? - three.js

I based some code on the "Add a 3D model" example at maplibre.org in order to draw just a horizontal plane on a map that uses setTerrain to add a terrain layer.
My intention is to draw a couple of semitransparent layers at given heights above sea level and have them intersect with the mountains, somewhat like contour lines.
In my first test I just created a 1km-wide square at altitude 0.
I am somewhat confused by the behavior, since altitude 0 turns out to be the height of the terrain at the center of the visible map area. When I then drag the map and release the mouse, altitude 0 somehow gets reset to the terrain height at the new center, making the plane change its apparent altitude.
The following animated GIF illustrates the problem:
The GIF has a mountain range to the right and the elevation is greatly exaggerated, in order to better illustrate the issue.
What do I need to do in order to be able to specify the height of the plane in meters above sea level and have it appear to be fixed at that height when dragging the map?
I think I need to get the height of the terrain at the center and add/subtract it from the plane's z-position so that it stays put relative to the surrounding landmarks, but I have no idea how to do this inside a custom layer's render function.
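A minimal sketch of that idea, assuming the map exposes queryTerrainElevation() for reading the terrain height in meters at a coordinate (check the MapLibre GL JS docs for your version), and using placeholder names planeMesh, mercator (the MercatorCoordinate used to place the plane, as in the "Add a 3D model" example) and targetAltitudeMeters; this mirrors the compensation described above rather than a confirmed fix:
// each frame, query the terrain elevation at the current map center and subtract it
// from the desired sea-level altitude, so the plane's z offset cancels out the
// terrain-following origin
function updatePlaneAltitude(map, planeMesh, mercator, targetAltitudeMeters) {
    // terrain height (in meters) under the current map center; may be null while
    // terrain tiles are still loading
    const centerElevation = map.queryTerrainElevation(map.getCenter()) || 0;
    // keep the plane at a fixed height above sea level instead of above the terrain
    // at the map center
    const offsetMeters = targetAltitudeMeters - centerElevation;
    planeMesh.position.z = offsetMeters * mercator.meterInMercatorCoordinateUnits();
}
// called from the custom layer's render(gl, matrix) hook, before drawing the scene:
// updatePlaneAltitude(this.map, planeMesh, mercator, 2000);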

Related

Camera Geometry: Algorithm for "object area correction"

For the past few months I have been working on a project that calculates the top-surface area of objects captured from above with a 3D depth camera.
Workflow of my project:
Capture an image (RGB and depth data) of a group of objects from the top view
Run instance segmentation on the RGB image
Calculate the real area of each segmented mask using the depth data
Some problems with the project:
All given objects have different shapes
As an object moves toward the edge of the image, its side, not just its top, starts to become visible.
Because of this, the mask area to be segmented gradually increases.
As a result, the actual area of an object located near the edge of the image is calculated to be larger than that of an object located in the center.
In the example image, object 1 is located in the middle of the field of view, so only the top of the object is visible, but object 2 is located toward the edge, so part of the top is lost and the side is visible.
Because of this, the segmented mask area is larger for objects located on the periphery than for objects located in the center.
I only want to find the area of the top of an object.
Example image of what I want:
Is there a way to geometrically correct the area of an object located near the edge of the image?
I tried to calibrate by multiplying the calculated area by a correction factor based on the angle between Vector 1, connecting the center of the camera lens to the center of the floor, and Vector 2, connecting the center of the lens to the center of gravity of the target object.
However, I gave up because I couldn't logically explain how much correction was needed.
fig 3:
What I would do is convert your RGB and depth image to a 3D mesh (a surface with bumps) using your camera settings (FOV, focal length), something like this:
Align already captured rgb and depth images
and then project it onto the ground plane (perpendicular to the camera view direction in the middle of the screen). To obtain the ground plane, simply take three 3D positions on the ground, p0, p1, p2 (forming a triangle), and use the cross product to compute the ground normal:
n = normalize(cross(p1-p0,p2-p1))
Now your plane is defined by p0 and n, so just convert each 3D coordinate by moving it onto the plane along the normal, i.e. subtracting its signed distance to the plane; if I see it right, something like this:
p' = p - n * dot(p-p0,n)
That should eliminate the problem with visible sides at the edges of the FOV. However, you should also take into account that when a side becomes visible, part of the top is also hidden. To remedy that, you might also find the axis of symmetry, use only the half of the top that is not partially hidden, and multiply the measured half-area by 2 ...
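Here is a self-contained sketch of the ground-plane fit and projection described above, in plain JavaScript with 3-element arrays as vectors (p0, p1, p2 are assumed ground samples; an illustration of the idea, not code tested against real depth data):
// basic vector helpers
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}
function normalize(a) {
    var len = Math.sqrt(dot(a, a));
    return [a[0] / len, a[1] / len, a[2] / len];
}
// ground plane from three sampled ground points (forming a triangle)
function groundPlane(p0, p1, p2) {
    var n = normalize(cross(sub(p1, p0), sub(p2, p1)));
    return { p0: p0, n: n };
}
// orthogonal projection of a 3D point onto the plane: move the point along the
// normal by its signed distance to the plane
function projectToPlane(p, plane) {
    var d = dot(sub(p, plane.p0), plane.n);
    return [p[0] - plane.n[0] * d,
            p[1] - plane.n[1] * d,
            p[2] - plane.n[2] * d];
}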
Accurate computation is virtually hopeless, because you don't see all sides.
Assuming your depth information is available as a range image, you can consider the points inside the segmentation mask of a single chicken, estimate the vertical direction at that point, rotate and project the points to obtain the silhouette.
But as a part of the surface is occluded, you may have to reconstruct it using symmetry.
There is no way to do this accurately for arbitrary objects, since there can be parts of the object that contribute to the "top area", but which the camera cannot see. Since the camera cannot see these parts, you can't tell how big they are.
Since all your objects are known to be chickens, though, you could get a pretty accurate estimate like this:
Use Principal Component Analysis to determine the orientation of each chicken.
Using many objects in many images, find a best-fit polynomial that estimates apparent chicken size by distance from the image center, and orientation relative to the distance vector.
For any given chicken, then, you can divide its apparent size by the estimated average apparent size for its distance and orientation, to get a normalized chicken size measurement.
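For the first step, a small sketch of estimating an object's orientation in the image with a 2D PCA over the pixel coordinates of its segmentation mask (maskPixels is a hypothetical array of {x, y} coordinates belonging to one mask):
function maskOrientation(maskPixels) {
    var n = maskPixels.length;
    var meanX = 0, meanY = 0;
    maskPixels.forEach(function (p) { meanX += p.x; meanY += p.y; });
    meanX /= n;
    meanY /= n;
    // 2x2 covariance matrix of the mask pixel coordinates
    var sxx = 0, sxy = 0, syy = 0;
    maskPixels.forEach(function (p) {
        var dx = p.x - meanX, dy = p.y - meanY;
        sxx += dx * dx;
        sxy += dx * dy;
        syy += dy * dy;
    });
    sxx /= n; sxy /= n; syy /= n;
    // angle of the principal axis (direction of largest variance), in radians
    var angle = 0.5 * Math.atan2(2 * sxy, sxx - syy);
    return { centroid: { x: meanX, y: meanY }, angle: angle };
}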

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In the code the model is called defaultMesh and uses a merged geometry for performance reasons:
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model and remove the corresponding points from a list of all the points, leaving me with a list of points the camera can see.
So far so good, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
// start with a copy of all grid points; occluded ones are removed below
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
var vector = new THREE.Vector3();
// create a ray from the camera position towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var point = gridPoints.geometry.vertices[i];
    vector.subVectors(point.clone(), startPoint);
    var ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    // if the ray towards the point hits the model, treat the point as hidden
    if (ray.intersectObject(defaultMesh).length > 0) {
        gridPointsVisible.splice(gridPointsVisible.indexOf(point), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
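A rough sketch of that idea, assuming a recent three.js API (renderer.setRenderTarget; older versions pass the target directly to render()) and a hypothetical pickingScene in which every point is drawn with a flat color packed from its index:
var pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low resolution is fine for picking
// pack a point index into an RGB color (supports up to 2^24 - 1 points); the same
// mapping must be used when coloring the points in the picking scene
function indexToColor(i) {
    var id = i + 1; // reserve 0 for "background / no point"
    return new THREE.Color(((id >> 16) & 255) / 255, ((id >> 8) & 255) / 255, (id & 255) / 255);
}
function findVisiblePointIndices(renderer, pickingScene, camera) {
    // render the flat-colored points off-screen
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);
    renderer.setRenderTarget(null);
    // read back the RGBA pixels of the render target
    var w = pickingTarget.width, h = pickingTarget.height;
    var buffer = new Uint8Array(w * h * 4);
    renderer.readRenderTargetPixels(pickingTarget, 0, 0, w, h, buffer);
    // every color that survived the depth test maps back to a visible point
    var visible = new Set();
    for (var i = 0; i < buffer.length; i += 4) {
        var id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];
        if (id > 0) visible.add(id - 1);
    }
    return visible;
}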

three.js delay in updating local clipping planes

For realising a scrollable text container (using my own bitmap fonts that are basically small sprite meshes) I am using local clipping planes.
When my text container moves, the clipping planes are updated according to the global boundaries of my container.
This works perfectly except for fast movements. In this case the clipping planes are slightly delayed behind the container making the text shine through where it shouldn't.
My first thought was that the code needed for updating the clipping planes might cause the delay, but even when I apply this order:
1. update the text box position
2. update the clipping planes
3. render()
the delay still exists
Could the reason lie in how three.js applies the actual clipping?
Here's a small code snippet that shows how I compute my upper clipping plane using two helper objects. One is a plane mesh that is positioned orthogonally on my text object (the red plane in the picture). The other is a THREE.Object3D that is positioned in the middle of the upper edge, used for computing the right plane constant.
// get the world direction of a helper plane mesh that is located orthogonally on my text plane
var upperClippingPlaneRotationProxyMeshWordDirection = _this.upperClippingPlaneRotationProxyMesh.getWorldDirection();
// get the world position of a helper 3d object that is located in the middle of the upper edge of my text plane
var upperClippingPlanePositionProxyObjPosition = _this.upperClippingPlanePositionProxyObj.getWorldPosition();
// a plane through origin which makes it easier for computing the plane constant
var upperPlaneInOrigin = new THREE.Plane(upperClippingPlaneRotationProxyMeshWordDirection, 0);
var dist = upperPlaneInOrigin.distanceToPoint(upperClippingPlanePositionProxyObjPosition);
var upperClippingPlane = new THREE.Plane(upperClippingPlaneRotationProxyMeshWordDirection, dist*-1);
// clipping plane update
_this.myUpperClippingPlane.copy(upperClippingPlane);
picture showing the text object with clipping plane helpers
I found the reason for the delay. In my matrix-updating code I only called updateMatrix() on the text object when it moves. To make sure that its child objects, including the helper meshes, update instantly, I had to call updateMatrixWorld(true); this makes sure that the clipping planes are computed correctly.
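A minimal sketch of that fix, with textObject, scrollDelta and updateClippingPlanes() as placeholder names for the text container, its per-frame movement and the plane-recomputation code shown above:
// 1. update the text box position
textObject.position.y += scrollDelta;
// force world-matrix updates for the text object AND all of its descendants
// (the helper meshes the clipping planes are derived from)
textObject.updateMatrixWorld(true);
// 2. update the clipping planes from the now up-to-date helper matrices
updateClippingPlanes();
// 3. render
renderer.render(scene, camera);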

Update plane texture offset from movement on a sphere

I'm working on a driving simulation in Three.js using height map data from the planet Venus.
GitHub repo here: https://github.com/hypothete/venus-walk
Here's how the simulation works so far:
In a hidden scene, a camera called the globeCamera moves at a fixed height over a sphere textured with the Venus height map. You can see this happening in the lower left viewport in my picture. The globeCamera renders its view to a WebGLRenderTarget to be used as a local height map. The result is in the second viewport in the middle left.
In the visible scene, a plane mesh called the terrainMesh has its vertices displaced up and down in correspondence with the values from the local height map. This gives the illusion that a vehicle placed in the center of the plane is moving across a surface when actually we're just updating the plane's vertices from the movement of the globeCamera.
Since I know the rotation of the globeCamera, I can pass that value to my fragment shader to rotate the terrainMesh's rock texture with the height map.
How can I offset the rock texture's position so that texture units translate with the terrain as well? I've tried tracking the globeCamera's offset as a 2D vector and adding that to the rotated UV in the fragment shader, but my results were inconsistent. Thanks for your help.
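For reference, a rough sketch of the kind of UV transform described above, with hypothetical uniforms rotationAngle and uvOffset fed from the globeCamera each frame; it only mirrors the approach described in the question, not a confirmed fix for the inconsistency:
var rockTexture = new THREE.TextureLoader().load('rock.jpg'); // placeholder texture path
var terrainMaterial = new THREE.ShaderMaterial({
    uniforms: {
        rockTexture: { value: rockTexture },
        rotationAngle: { value: 0.0 },            // globeCamera rotation, updated per frame
        uvOffset: { value: new THREE.Vector2() }  // globeCamera movement tracked as a 2D offset
    },
    vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform sampler2D rockTexture;',
        'uniform float rotationAngle;',
        'uniform vec2 uvOffset;',
        'varying vec2 vUv;',
        'void main() {',
        '    // rotate the UV around the center of the plane, then translate it',
        '    vec2 centered = vUv - 0.5;',
        '    float c = cos(rotationAngle), s = sin(rotationAngle);',
        '    vec2 rotated = vec2(c * centered.x - s * centered.y,',
        '                        s * centered.x + c * centered.y);',
        '    gl_FragColor = texture2D(rockTexture, rotated + 0.5 + uvOffset);',
        '}'
    ].join('\n')
});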

Image Effect with Dark Borders

I am creating an effects library for a PhotoBooth app. I have created effects like Black/White, Vintage, Sepia, Retro, etc.
Now I want to create a few effects that have a dark border at the edges, forming a kind of frame for the image, something like this -> Example Effect
How can I do this using Pixel Bender and Flash?
The effect you are describing is called vignetting. It is basically just darkening the pixels with some weight that changes depending on distance from the center of the image. In image editing it corresponds to overlaying the image with black color and applying a circular or elliptic mask to it, for example:
(source: johnhpanos.com)
You can do this by several methods, depending on how you operate with the image and its pixels. For example, by multiplying the pixels by a weight coefficient that is close to 1 near the center and decreases farther away from it. The distance can be calculated from the difference between the pixel's coordinates and the image center's coordinates.
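Not Pixel Bender, but here is the same per-pixel weighting sketched in JavaScript on a canvas ImageData buffer, with strength as an assumed parameter between 0 and 1 controlling how dark the corners get:
function applyVignette(imageData, strength) {
    var w = imageData.width, h = imageData.height, data = imageData.data;
    var cx = w / 2, cy = h / 2;
    var maxDist = Math.sqrt(cx * cx + cy * cy); // distance from the center to a corner
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            var dx = x - cx, dy = y - cy;
            var dist = Math.sqrt(dx * dx + dy * dy) / maxDist; // 0 at center, 1 at corners
            // weight is ~1 at the center and falls off towards the edges
            var weight = 1 - strength * dist * dist;
            var i = (y * w + x) * 4;
            data[i]     *= weight; // R
            data[i + 1] *= weight; // G
            data[i + 2] *= weight; // B
            // alpha (data[i + 3]) is left untouched
        }
    }
    return imageData;
}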
