Three.js: Make Raycaster sample depth buffer instead of what it does now

If I have a shader that discards (or otherwise makes transparent) portions of a mesh, this (understandably) does not affect the behavior of raycasting. It should be possible to sample the Z buffer to obtain raycast positions, though of course we'd have side effects such as no longer being able to get any data about which object was "found".
Basically, if we can do a "normal" raycast and then also do a Z-buffer check, we can comb through the complete set of raycast intersections to find the one that really corresponds to the thing we clicked on...
It's unclear whether it is possible to sample the Z buffer with three.js. Is it possible at all with WebGL?

No, Raycaster cannot sample the depth buffer.
However, you can use another technique referred to as "GPU-Picking".
By assigning a unique color to each object, you can figure out which object was selected. You can use a pattern like this one:
//render the picking scene off-screen
renderer.render( pickingScene, camera, pickingTexture );
//create a buffer for reading a single pixel
var pixelBuffer = new Uint8Array( 4 );
//read the pixel under the mouse from the texture
renderer.readRenderTargetPixels( pickingTexture, mouse.x, pickingTexture.height - mouse.y, 1, 1, pixelBuffer );
//interpret the pixel as an ID
var id = ( pixelBuffer[ 0 ] << 16 ) | ( pixelBuffer[ 1 ] << 8 ) | ( pixelBuffer[ 2 ] );
var data = pickingData[ id ];
//render the visible scene as usual
renderer.render( scene, camera );
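For context, here is a minimal sketch of how the picking scene and ID colors might be set up. The names pickingScene, pickingTexture and pickingData match the snippet above; the rest (addPickable, the render-target size) is an assumption of mine, not the exact example code:
var pickingScene = new THREE.Scene();
var pickingTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
var pickingData = [];
function addPickable( geometry, id, data ) {
    // THREE.Color interprets the integer ID as a hex color,
    // so the object's ID is literally what gets rasterized;
    // IDs should start at 1 so 0 can mean "nothing hit"
    var pickingMaterial = new THREE.MeshBasicMaterial( { color: new THREE.Color( id ) } );
    pickingScene.add( new THREE.Mesh( geometry, pickingMaterial ) );
    pickingData[ id ] = data; // looked up after readRenderTargetPixels
}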
See these three.js examples:
http://threejs.org/examples/webgl_interactive_cubes_gpu.html
http://threejs.org/examples/webgl_interactive_instances_gpu.html
three.js r.84

Related

Map Texture to Plane without distortion

In Three.js, how can I change the way in which a texture gets mapped onto a plane?
Let's assume we have a 1x1 plane and a 16:9 image. How can I control the way in which that image gets mapped onto the plane?
By default, the image gets "squished". I would like it to maintain its aspect ratio and have any overlap get "cut off". Is there a way to configure the material or texture to do this, or would I use a shader? If so, what would it need to look like?
const planeMesh = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(1, 1),
  new THREE.MeshBasicMaterial({
    map: texture,
  })
);
PS: In the future, I would also like to be able to zoom into and out of the image on mouse hover without affecting the size of the plane, so I'm thinking a shader might be better?
A Texture already has several properties built-in that can do what you're looking for.
const texture = textureLoader.load("whatever.png");
const planeMesh = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(1, 1),
  new THREE.MeshBasicMaterial({
    map: texture,
  })
);
// Set the pivot point to the center of the texture
texture.center.set(0.5, 0.5);
// Make the texture repeat 0.5625 times on the x-axis to match the 16:9 ratio
let ratio = 9 / 16;
texture.repeat.set(ratio, 1);
// Scale the texture up to "zoom" into it (a smaller repeat shows less of the image)
let zoom = 0.5;
texture.repeat.set(ratio * zoom, 1 * zoom);
You can read more about the .repeat, .center, and even .rotation properties in the Texture docs. Just keep in mind that repeating a texture is a bit counter-intuitive, because it is the inverse of scaling it: to scale a texture by 2, you tell it to repeat 1/2 times.
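As for the PS: the same .repeat property can drive a hover zoom without touching the plane itself. A rough sketch of that idea, assuming a hypothetical isHovered flag (e.g. set from a Raycaster intersection test), not a tested snippet:
let zoom = 1;
function animate() {
  requestAnimationFrame(animate);
  // a smaller repeat means more zoomed in; ease toward the target each frame
  const target = isHovered ? 0.5 : 1;
  zoom += (target - zoom) * 0.1;
  texture.repeat.set(ratio * zoom, zoom);
  renderer.render(scene, camera);
}
animate();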

How to set up a camera that follows a circular path?

I'm trying to create a camera that follows an object that rotates on an orbit around a sphere. But every time the camera reaches the poles of the orbit, the direction changes. I just set the position of the camera according to the object it has to follow and call lookAt afterwards:
function render() {
    rotation += 0.002;
    // get the next point on the path
    pt = path.getPoint( t );
    // set the marker position and orient it toward the sphere's center
    marker.position.set( pt.x, pt.y, pt.z );
    marker.lookAt( new THREE.Vector3( 0, 0, 0 ) );
    // rotate the mesh that illustrates the orbit
    mesh.rotation.y = rotation;
    // set the camera position and look at the marker
    var cameraPt = cameraPath.getPoint( t );
    camera.position.set( cameraPt.x, cameraPt.y, cameraPt.z );
    camera.lookAt( marker.position );
    t = ( t >= 1 ) ? 0 : t + 0.002;
    renderer.render( scene, camera );
}
Here's a complete fiddle: http://jsfiddle.net/krw8nwLn/69/
I've created another fiddle with a second cube which represents the desired camera behaviour: http://jsfiddle.net/krw8nwLn/70/
What happens is that the camera's lookAt function always tries to align the camera with the horizontal plane (so that the "up" direction is always (0, 1, 0)). And when you reach the top and bottom of the ellipse path, the camera instantaneously rotates 180° so that up is still up. You can also see this in your "desired behaviour" example, as the camera cube rotates so that the colors on the other side are shown.
A solution is to not use lookAt for this case, because it does not support cameras doing flips like this. Instead set the camera's rotation vector directly. (Which requires some math, but you look like a math guy.)
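One possible sketch of that idea (my own assumption, not code from the original answer): keep lookAt, but feed it an "up" vector that travels with the path, e.g. the path tangent, so it never snaps back to world +Y at the poles:
// inside render(), replacing the camera block above
var cameraPt = cameraPath.getPoint( t );
camera.position.set( cameraPt.x, cameraPt.y, cameraPt.z );
// use the direction of travel as "up" so the camera rolls smoothly
// through the poles instead of flipping 180°
camera.up.copy( cameraPath.getTangent( t ) );
camera.lookAt( marker.position );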

Seams on cube edges when using texture atlas with three.js

I get seams between the horizontal faces of the cube when I use a texture atlas in three.js.
This is demo: http://jsfiddle.net/rnix/gtxcj3qh/7/ or http://jsfiddle.net/gtxcj3qh/8/ (from comments)
Screenshot of the problem:
Here I use repeat and offset:
var materials = [];
var t = [];
var imgData = document.getElementById("texture_atlas").src;
for ( var i = 0; i < 6; i ++ ) {
    t[i] = THREE.ImageUtils.loadTexture( imgData ); // 2048x256
    t[i].repeat.x = 1 / 8;
    t[i].offset.x = i / 8;
    //t[i].magFilter = THREE.NearestFilter;
    t[i].minFilter = THREE.NearestFilter;
    t[i].generateMipmaps = false;
    materials.push( new THREE.MeshBasicMaterial( { map: t[i], overdraw: 0.5 } ) );
}
var skyBox = new THREE.Mesh( new THREE.CubeGeometry( 1024, 1024, 1024), new THREE.MeshFaceMaterial(materials) );
skyBox.applyMatrix( new THREE.Matrix4().makeScale( 1, 1, -1 ) );
scene.add( skyBox );
The atlas is 2048x256 (a power of two). I also tried manual UV-mapping instead of repeat, but the result is the same. I use 8 tiles instead of 6 because I thought the precision of dividing by 6 might cause the problem, but it does not.
The pixels on this line come from the next tile in the atlas. I tried a completely white atlas and there were no artefacts, which explains why there are no seams on the vertical borders of the Z-faces. I have played with filters, wrapT, wrapS and mipmaps, but it does not help. Increasing the resolution does not help either; here is an 8192x1024 atlas: http://s.getid.org/jsfiddle/atlas.png. I also tried another atlas, and the result is the same.
I know that I can split the atlas into separate files and it works perfectly, but that is not convenient.
What's wrong?
I think this is the classic filtering problem with texture sheets. On image borders in a texture sheet, the GPU may pick the texel from either the correct image or the neighboring image due to limited precision. Because the colors are usually very different, this results in visible seams. With regular textures, this is solved by CLAMP_TO_EDGE.
If you must use a texture atlas, you need to fake CLAMP_TO_EDGE behavior by padding the image borders. See this answer: https://gamedev.stackexchange.com/questions/61796/sprite-sheet-textures-picking-up-edges-of-adjacent-texture. It should look something like this (borders exaggerated for clarity):
Otherwise, the simpler solution is to use a different texture for each face. WebGL supports cube textures, which are what skyboxes are usually implemented with.
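For instance, a hedged sketch of the cube-texture route using the API from more recent three.js versions (the six file names are placeholders):
var loader = new THREE.CubeTextureLoader();
// face order: +x, -x, +y, -y, +z, -z
scene.background = loader.load( [
    'px.png', 'nx.png', 'py.png', 'ny.png', 'pz.png', 'nz.png'
] );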
A hack: in the UVs, replace every value of 1.0 with 0.999 and every 0.0 with 0.001. Insetting each tile like this partially hides the problem.
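A rough sketch of that UV hack, assuming the old Geometry API used in this question:
var eps = 0.001;
geometry.faceVertexUvs[ 0 ].forEach( function ( faceUvs ) {
    faceUvs.forEach( function ( uv ) {
        // pull each tile's UVs slightly inward, away from the tile borders
        uv.x = Math.min( Math.max( uv.x, eps ), 1 - eps );
        uv.y = Math.min( Math.max( uv.y, eps ), 1 - eps );
    } );
} );
geometry.uvsNeedUpdate = true;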

Unexpected mesh results from ThreeCSG boolean operation

I am creating a scene & have used a boolean function to cut out holes in my wall. However the lighting reveals that the resultant shapes have messed up faces. I want the surface to look like one solid piece, rather than fragmented and displaying lighting backwards. Does anyone know what could be going wrong with my geometry?
The code that booleans objects is as follows:
//boolean subtract two shapes: convert the meshes to BSPs, subtract, then convert back to a mesh
var booleanSubtract = function (Mesh1, Mesh2, material) {
    //Mesh1 conversion
    var mesh1BSP = new ThreeBSP( Mesh1 );
    //Mesh2 conversion
    var mesh2BSP = new ThreeBSP( Mesh2 );
    var subtract_bsp = mesh1BSP.subtract( mesh2BSP );
    var result = subtract_bsp.toMesh( material );
    result.geometry.computeVertexNormals();
    return result;
};
I have two lights in the scene:
var light = new THREE.DirectionalLight( 0xffffff, 0.75 );
light.position.set( 0, 0, 1 );
scene.add( light );
//create a point light
var pointLight = new THREE.PointLight(0xFFFFFF);
// set its position
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
// add to the scene
scene.add(pointLight);
EDIT: Using WestLangley's suggestion, I was able to partially fix the wall rendering. And by using material.wireframe=true; I can see that after the boolean operation my wall faces are not merged. Is there a way to merge them?
Your problems are due to two issues.
First, you should be using FlatShading.
Second, as explained in this Stack Overflow post, MeshLambertMaterial only calculates the lighting at each vertex and interpolates the color across each face. MeshPhongMaterial calculates the color at each texel.
You need to use MeshPhongMaterial to avoid the lighting artifacts you are seeing.
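A minimal sketch of that combination for r68-era three.js (wallMesh, holeMesh and the color are placeholders of mine, reusing the booleanSubtract helper above):
var wallMaterial = new THREE.MeshPhongMaterial( {
    color: 0xcccccc,
    shading: THREE.FlatShading // per-face normals hide the CSG triangulation
} );
var wall = booleanSubtract( wallMesh, holeMesh, wallMaterial );
scene.add( wall );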
three.js r.68

Changing material color on a merged mesh with three.js

Is it possible to interact with the buffer used when merging multiple meshes, in order to change the color of a selected individual mesh?
It's easy to do such a thing with a collection of meshes, but what about a merged mesh with multiple different materials?
@hgates, your last comment was very helpful to me; I was looking for the same thing for days!
OK, I set a color on each face and set vertexColors to true on the material, and that solved the problem! :)
I'll write out the whole approach I used here, as a proper answer for those in the same situation:
// Define a main Geometry used for the final mesh
var mainGeometry = new THREE.Geometry();
// Create a Geometry, a Material and a Mesh shared by all the shapes you want to merge together (here I did 1000 cubes)
var cubeGeometry = new THREE.CubeGeometry( 1, 1, 1 );
var cubeMaterial = new THREE.MeshBasicMaterial( { vertexColors: true } );
var cubeMesh = new THREE.Mesh( cubeGeometry );
for ( var i = 0; i < 1000; i++ ) {
    // I set the color on the material for each of my cubes individually, which is just random here
    cubeMaterial.color.setHex( Math.random() * 0xffffff );
    // For each face of the cube, I assign the color
    for ( var j = 0; j < cubeGeometry.faces.length; j++ ) {
        cubeGeometry.faces[ j ].color = cubeMaterial.color;
    }
    // Each cube is merged into mainGeometry (merge copies the current face colors)
    THREE.GeometryUtils.merge( mainGeometry, cubeMesh );
}
// Then I create my final mesh, composed of the mainGeometry and the cubeMaterial
var finalMesh = new THREE.Mesh( mainGeometry, cubeMaterial );
scene.add( finalMesh );
Hope it will help as it helped me! :)
It depends on what you mean by "changing colors". Note that after merging, the mesh is like any other non-merged mesh.
If you mean vertex colors, it is possible to iterate over the faces and pick the ones to recolor based on the material index (or on merge order), as sketched below.
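For example, a rough sketch under the layout of the answer above (each cube contributes 12 Face3s in merge order; setCubeColor is a hypothetical helper of mine):
function setCubeColor( mergedMesh, cubeIndex, hex ) {
    var faces = mergedMesh.geometry.faces;
    // repaint the 12 triangle faces that this cube contributed
    for ( var j = cubeIndex * 12; j < ( cubeIndex + 1 ) * 12; j++ ) {
        faces[ j ].color.setHex( hex );
    }
    mergedMesh.geometry.colorsNeedUpdate = true;
}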
If you mean setting a color on the material itself, sure, it's possible. Merged meshes can still have multiple materials the same way ordinary meshes do - in a MeshFaceMaterial - though if you are merging yourself, you need to pass in a material index offset parameter for each geometry.
this.meshMaterials.push( new THREE.MeshBasicMaterial(
    { color: 0x00ff00 * Math.random(), side: THREE.DoubleSide } ) );
// point every face of this geometry at the material that was just added
for ( var i = 0; i < geometry.faces.length; i++ ) {
    geometry.faces[ i ].materialIndex = this.meshMaterials.length - 1;
}
var mesh = new THREE.Mesh( geometry );
THREE.GeometryUtils.merge( this.globalMesh, mesh );
var mergedMesh = new THREE.Mesh( this.globalMesh, new THREE.MeshFaceMaterial( this.meshMaterials ) );
Works like a charm, for those who need an example. But! This creates multiple additional buffers (index and vertex data) and multiple drawElements calls too :( I inspected the draw calls with the WebGL Inspector. Before adding the MeshFaceMaterial: 75 GL API calls, running at 60 fps easily. After: 3490 GL API calls, and the fps drops about 20%, to 45-50 fps. This means that drawElements is called for every mesh, and we lose the point of merging meshes. Did I miss something here? I want to share different materials on the same buffer.
