three.js delay in updating local clipping planes - three.js

To realise a scrollable text container (using my own bitmap fonts, which are basically small sprite meshes) I am using local clipping planes.
When my text container moves the clipping planes are updated according to the global boundaries of my container.
This works perfectly except for fast movements: in that case the clipping planes lag slightly behind the container, making the text shine through where it shouldn't.
My first thought was that the code needed to update the clipping planes might cause the delay, but even when I apply this order:
1. update the text box position
2. update the clipping planes
3. render()
the delay still exists.
Could the reason lie within the three.js framework itself, in how the actual clipping is applied?
Here's a small code snippet that shows how I compute my upper clipping plane using two helper objects. One is a plane mesh positioned orthogonally on my text object (the red plane in the picture). The other is a THREE.Object3D positioned in the middle of the upper edge, used for computing the right plane constant.
// get the world direction of a helper plane mesh that is positioned orthogonally on my text plane
var upperClippingPlaneRotationProxyMeshWorldDirection = _this.upperClippingPlaneRotationProxyMesh.getWorldDirection();
// get the world position of a helper Object3D that is located in the middle of the upper edge of my text plane
var upperClippingPlanePositionProxyObjPosition = _this.upperClippingPlanePositionProxyObj.getWorldPosition();
// a plane through the origin, which makes it easier to compute the plane constant
var upperPlaneInOrigin = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, 0);
var dist = upperPlaneInOrigin.distanceToPoint(upperClippingPlanePositionProxyObjPosition);
var upperClippingPlane = new THREE.Plane(upperClippingPlaneRotationProxyMeshWorldDirection, dist * -1);
// clipping plane update
_this.myUpperClippingPlane.copy(upperClippingPlane);
[picture showing the text object with clipping plane helpers]

I found the reason for the delay. In my matrix updating code I only called updateMatrix() on the text object when it moves. To make sure that its child objects, including the helper meshes, update instantly, I had to call updateMatrixWorld(true); this ensures the clipping planes are computed correctly.
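For illustration, the resulting update order looks roughly like the sketch below; textContainer, updateClippingPlanes(), renderer, scene and camera are placeholder names, not my original code:
function onScroll(deltaY) {
  // 1. move the text container
  textContainer.position.y += deltaY;
  // 2. force the world matrices of the container AND its children
  //    (the clipping plane helpers) to update immediately
  textContainer.updateMatrixWorld(true);
  // 3. recompute the clipping planes from the now up-to-date helpers
  updateClippingPlanes();
  // 4. render with the fresh planes
  renderer.render(scene, camera);
}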

Related

Raycasting to intersect objects that have been displaced by vertex shader

Let's say I have a vertical list of meshes created from PlaneBufferGeometry with ShaderMaterial. The meshes are distributed vertically and evenly spaced.
The list will have two states:
Displaying the meshes as they are
Displaying meshes with each object's vertices transformed by the vertex shader to the same arbitrary value, let's say z = -50. This gives a zoomed out effect and the user can scroll through this list (in the code we do this by moving the camera y position)
In my app I'm trying to make my mouseover events work for the second state, but it's tricky: since the GPU transforms the vertices, the updated positions are not reflected in the attributes on the JS side.
Note: I've looked into GPU picking and do not want to use it, because I believe there should be a simpler way to do this without render targets.
Attempted Solution
My current approach is to manually change the boundingBox of each plane when we are in the second state like so:
var box = new THREE.Box3().setFromObject(plane);
box.min.z = -50;
box.max.z = -50;
plane.geometry.boundingBox = box;
And then I change the boundingSphere's center to have the same z position of -50 after computing it.
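That adjustment would look roughly like this (a sketch of the idea, not my exact code):
plane.geometry.computeBoundingSphere();
// move the sphere's center to where the vertex shader puts the vertices
plane.geometry.boundingSphere.center.z = -50;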
I did this because I looked into the Raycaster and Mesh code for THREE.js, and it seems like they check both boundingSphere and boundingBox for object intersections. So I thought that if I modified both of them to reflect the transforms done by the GPU, the raycaster would work fine, but it doesn't seem to be working for me.
The relevant raycaster code is here:
// mouse being vec2 of normalized coordinates and camera being a perspective camera
raycaster.setFromCamera( mouse, camera );
const intersects = raycaster.intersectObjects( planes );
Possible Theories
The only thing I can think of that's wrong about this approach is that maybe I'm not projecting the mouse coords right? Since all the objects now lie on the plane z = -50, would I need to project those mouse coordinates onto that plane?
Inspired by the link posted by @prisoner849, I found a working solution: create additional transparent planes equal to the number of planes in the scene. I set the z position of these planes to -50 and just intersect with them when in state #2.
A bit hacky, but works for now.
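Roughly, the workaround looks like this (a sketch; fully transparent materials are just one way to keep the proxies from showing up):
// one transparent proxy plane per original plane, placed where the shader puts the real vertices
const proxyPlanes = planes.map(function (plane) {
  const proxy = new THREE.Mesh(
    plane.geometry,
    new THREE.MeshBasicMaterial({ transparent: true, opacity: 0 })
  );
  proxy.position.copy(plane.position);
  proxy.position.z = -50;
  scene.add(proxy);
  return proxy;
});
// in state #2, raycast against the proxies instead of the displaced planes
raycaster.setFromCamera(mouse, camera);
const intersects = raycaster.intersectObjects(proxyPlanes);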

Reference existing WebGL depth buffer when rendering a new ThreeJS scene

I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need to have my own separate ThreeJS canvas that displays both the black box canvas data, and then elements that I draw, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
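For concreteness, approach (1) would look roughly like the sketch below, assuming the black box uses a WebGL 1 context that can be retrieved from its canvas element; blackBoxCanvas, overlayScene and overlayCamera are placeholder names:
// reuse the black box's canvas and context so its depth buffer is shared
const gl = blackBoxCanvas.getContext('webgl');
const overlayRenderer = new THREE.WebGLRenderer({ canvas: blackBoxCanvas, context: gl });
overlayRenderer.autoClear = false; // keep the color and depth buffers the black box already drew
// every frame, after the black box has rendered:
overlayRenderer.state.reset(); // three.js's cached GL state is stale after external GL calls
overlayRenderer.render(overlayScene, overlayCamera);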
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the above attempts. In the following animate loop, if I comment out the overlayRenderer lines, the sphere below is red and offset from the center (as it should be), but if I leave those lines in, I get the image below. I also get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
  requestAnimationFrame(() => this.animate());
  this.blackBoxCamera.copy(this.overlayCamera);
  this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);
  this.overlayRenderer.state.reset();
  this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}

Three.js bounding box wrong alignment

In my program, I add points to a particle system and then calculate a bounding box for it like so:
var object = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xff0000 }));
var box = new THREE.BoxHelper( object, 0xffff00 );
scene.add(box);
geometry is an instance of BufferGeometry and contains all the points constituting the particle system.
What I see is that the bounding box is wrongly aligned: it is oriented perpendicular to the expected direction.
I expect the wireframe box to envelop the point cloud.
Do I need to do something extra here?
Edit:
The code I am working on is in a GitHub repo:
github file
In the function ParticleSystemBatcher.prototype.push, points read from a file are pushed into the particle system. I have added the code above at the end of this function. The bounding box does appear, but it is wrongly aligned.
You have a THREE.ShaderMaterial which applies some logic to position these vertices, so the rendered result is different from the positions stored in main memory.
You can debug this by creating a Mesh or Sprite for each particle and positioning it where you expect that particle to be, using just the scene graph (object.position.set()). The result will be a bunch of dots that are not in the same space as your particle system, but these dots will fit the bounding box.
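A minimal sketch of that debugging idea, assuming geometry is the particle system's BufferGeometry (the sprite material is arbitrary):
// drop a small sprite at each CPU-side vertex position
var positions = geometry.attributes.position;
var debugMaterial = new THREE.SpriteMaterial({ color: 0x00ff00 });
for (var i = 0; i < positions.count; i++) {
  var dot = new THREE.Sprite(debugMaterial);
  dot.position.set(positions.getX(i), positions.getY(i), positions.getZ(i));
  scene.add(dot);
}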
The solution is to apply, on the CPU side, the same transformation that is being applied by the shader.
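For example, if the shader simply offset every vertex by a fixed amount, that offset could be baked into a copy of the geometry before computing the box. The offset variable here is hypothetical; the real transform is whatever the shader in the linked repo applies:
var helperGeometry = geometry.clone();
helperGeometry.translate(offset.x, offset.y, offset.z); // mirror the shader's transform on the CPU side
helperGeometry.computeBoundingBox();
var boxHelper = new THREE.Box3Helper(helperGeometry.boundingBox, 0xffff00);
scene.add(boxHelper);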

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sat on top of the grid, as seen below. In the code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given point, i.e. every time the camera position is updated using my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
gridPointsVisible = gridPoints.geometry.vertices.slice(0);
startPoint = camera.position.clone();
// cast a ray from the camera position towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var point = gridPoints.geometry.vertices[i];
    direction = point.clone();
    vector.subVectors(direction, startPoint);
    ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the model blocks this point, so it is not visible from the camera
        gridPointsVisible.splice(gridPointsVisible.indexOf(point), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better of way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, which is especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
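Roughly, the off-screen render and readback could look like this sketch; pickingScene (a copy of the scene using the flat color-ID material) and the 256×256 target size are assumptions:
// render the color-ID version of the scene into an off-screen target
const pickingTarget = new THREE.WebGLRenderTarget(256, 256);
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// read the pixels back and collect every color ID that actually appears
const buffer = new Uint8Array(256 * 256 * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);
const visibleIds = new Set();
for (let i = 0; i < buffer.length; i += 4) {
  const id = (buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2];
  if (id !== 0) visibleIds.add(id - 1); // background is 0; sphere i was rendered with color i + 1
}
// any grid point whose index is in visibleIds is visible to the camera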

How to load textures to different faces of a cube in GLGE? (or at least WebGL)

I have 6 textures I would like to load onto 6 different faces of a cube. I'm trying to make a new texture using GLGE.TextureCube(), and then I load all six images onto the faces they are supposed to be on, like so:
mapTex = new GLGE.TextureCube();
mapTex.setSrcNegX("models/map/negx.jpg"); // they are all 1024x1024
mapTex.setSrcNegY("models/map/negy.jpg");
mapTex.setSrcNegZ("models/map/negz.jpg");
mapTex.setSrcPosX("models/map/posx.jpg");
mapTex.setSrcPosY("models/map/posy.jpg");
mapTex.setSrcPosZ("models/map/posz.jpg");
And then I add the texture to the Wavefront object. However, it seems only one of the 6 texture images is getting mapped, and it's mapped incorrectly.
My guess is that when it creates the new texture map out of the other 6, it tiles them beside each other, so the new texture map's coordinates no longer correspond to those of my obj file.
How can I properly combine 6 textures to one map to be used with GLGE? Or is there a way to manually load a texture on a face of a Mesh?
Cube maps are somewhat special, as the usual UV (ST) texture coordinates don't work for them. A cube map, as the name suggests, consists of 6 square textures arranged as the faces of a cube. The texture coordinates are not absolute positions on the cube's faces but directions pointing away from the center of the cube; the position where a ray from the center in the given direction hits the cube is the position of the texture on that particular face.
If you apply texture coordinates with the third coordinate being zero, like those in a Wavefront file, you will address only a slice of the cube's faces, namely the part that intersects the XY plane. If you want to see a working cubemap in action, use the object's smooth normals as texture coordinates.
You'll need to use a different texture coordinate source, e.g.:
materialLayer.setMapinput(GLGE.MAP_OBJ)
Depending on what you want, try GLGE.MAP_OBJ, GLGE.MAP_NORM or GLGE.MAP_ENV.
