Three.js normals not correct - three.js

I have a script that shows a normal line on a mesh. In a minimal script everything works fine, but if I use the same model in one of my existing applications, everything goes wrong: facing the camera the normal detection seems correct, but as soon as I rotate the mesh the normals come out wrong.
In both scripts I use a Raycaster to get the intersection, and I use:
// point of intersection on the mesh surface
var p = intersects[ 0 ].point;
intersection.point.copy( p );
// end point of the line: the face normal, scaled and offset from the hit point
var n = intersects[ 0 ].face.normal.clone();
n.multiplyScalar( 100 );
n.add( intersects[ 0 ].point );
intersection.normal.copy( intersects[ 0 ].face.normal );
// update the two vertices of the line that visualizes the normal
line.geometry.vertices[ 0 ].copy( intersection.point );
line.geometry.vertices[ 1 ].copy( n );
line.geometry.verticesNeedUpdate = true;
to get the normal.
What could cause this difference, given that I am using the same mesh and the same commands to get the normals in both scripts?

I found the origin of the problem:
in the working script, I was using OrbitControls to transform the mesh;
in the non-working script, I was transforming the mesh myself, and it seems the normals were not updated. The intersection point was correct, but the normal orientation was the same as before any transformation of the mesh: the transformations were not applied to the normals. I did not dig further and switched to OrbitControls.
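For reference, a common fix in this situation (a sketch, not the route taken above) is to bring the object-space face normal into world space with the mesh's world matrix before drawing the line:
// face.normal is in the mesh's object space; convert it to world space first
var mesh = intersects[ 0 ].object;
var normalMatrix = new THREE.Matrix3().getNormalMatrix( mesh.matrixWorld );
var worldNormal = intersects[ 0 ].face.normal.clone()
    .applyMatrix3( normalMatrix )
    .normalize();

// then build the line end point exactly as before
var n = worldNormal.multiplyScalar( 100 ).add( intersects[ 0 ].point );
line.geometry.vertices[ 0 ].copy( intersects[ 0 ].point );
line.geometry.vertices[ 1 ].copy( n );
line.geometry.verticesNeedUpdate = true;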

Related

Three.js: Make Raycaster sample depth buffer instead of what it does now

If I have a shader that discards (or makes otherwise transparent) portions of a mesh, this (understandably) does not affect the behavior of the raycasting. It should be possible to sample the Z buffer to obtain raycast positions, though of course we'd have other side-effects such as no longer being able to get any data about which object was "found".
Basically, though, if we can do a "normal" raycast and then also check the Z-buffer, we can comb through the complete set of raycast intersections to find the one that really corresponds to the thing we clicked on and are looking at...
It's unclear if it is possible to sample the Z buffer with three.js. Is it possible at all with WebGL?
No, Raycaster cannot sample the depth buffer.
However, you can use another technique referred to as "GPU-Picking".
By assigning a unique color to each object, you can figure out which object was selected. You can use a pattern like this one:
//render the picking scene off-screen
renderer.render( pickingScene, camera, pickingTexture );
//create buffer for reading single pixel
var pixelBuffer = new Uint8Array( 4 );
//read the pixel under the mouse from the texture
renderer.readRenderTargetPixels(pickingTexture, mouse.x, pickingTexture.height - mouse.y, 1, 1, pixelBuffer);
//interpret the pixel as an ID
var id = ( pixelBuffer[0] << 16 ) | ( pixelBuffer[1] << 8 ) | ( pixelBuffer[2] );
var data = pickingData[ id ];
renderer.render( scene, camera );
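For completeness, here is a minimal sketch of how the picking scene itself can be set up, with one flat-colored clone per pickable object (the official examples instead merge geometry and encode the ID in vertex colors); pickingData, addPickable and the ID scheme are illustrative names, not part of any API:
// parallel scene whose materials encode object IDs as colors
var pickingScene = new THREE.Scene();
var pickingData = {};
var pickingTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

function addPickable( mesh, id ) {
    // encode the integer ID (start at 1, so black = "nothing hit") as the material color
    var pickingMaterial = new THREE.MeshBasicMaterial( { color: id } );
    var pickingMesh = new THREE.Mesh( mesh.geometry, pickingMaterial );
    pickingMesh.position.copy( mesh.position );
    pickingMesh.rotation.copy( mesh.rotation );
    pickingMesh.scale.copy( mesh.scale );
    pickingScene.add( pickingMesh );
    pickingData[ id ] = mesh; // map the ID back to the real object
}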
See these three.js examples:
http://threejs.org/examples/webgl_interactive_cubes_gpu.html
http://threejs.org/examples/webgl_interactive_instances_gpu.html
three.js r.84

Seams on cube edges when using texture atlas with three.js

I get seams between the horizontal faces of a cube when using a texture atlas in three.js.
Demo: http://jsfiddle.net/rnix/gtxcj3qh/7/ or http://jsfiddle.net/gtxcj3qh/8/ (from comments)
Screenshot of the problem:
Here I use repeat and offset:
var materials = [];
var t = [];
var imgData = document.getElementById("texture_atlas").src;
for ( var i = 0; i < 6; i ++ ) {
t[i] = THREE.ImageUtils.loadTexture( imgData ); //2048x256
t[i].repeat.x = 1 / 8;
t[i].offset.x = i / 8;
//t[i].magFilter = THREE.NearestFilter;
t[i].minFilter = THREE.NearestFilter;
t[i].generateMipmaps = false;
materials.push( new THREE.MeshBasicMaterial( { map: t[i], overdraw: 0.5 } ) );
}
var skyBox = new THREE.Mesh( new THREE.CubeGeometry( 1024, 1024, 1024), new THREE.MeshFaceMaterial(materials) );
skyBox.applyMatrix( new THREE.Matrix4().makeScale( 1, 1, -1 ) );
scene.add( skyBox );
The atlas is 2048x256 (powers of two). I also tried manual UV mapping instead of repeat/offset, but the result is the same. I use 8 tiles instead of 6 because I thought the precision of dividing by 6 might cause the problem, but it does not.
The pixels on this line come from the next tile in the atlas. I tried a completely white atlas and there were no artifacts; this explains why there are no seams on the vertical borders of the Z-faces. I have played with filters, wrapT, wrapS and mipmaps, but it does not help. Increasing the resolution does not help either; here is an 8192x1024 atlas: http://s.getid.org/jsfiddle/atlas.png. I also tried another atlas, and the result is the same.
I know I can split the atlas into separate files and it works perfectly, but that is not convenient.
What's wrong?
I think the issue is a filtering problem with texture sheets. On image borders within a texture sheet, the GPU may pick the texel from either the correct image or the neighbouring image due to limited precision. Because the colors are usually very different, this results in visible seams. With regular single-image textures, this is solved by CLAMP_TO_EDGE.
If you must use a texture atlas, you need to fake CLAMP_TO_EDGE behaviour by padding the image borders. See this answer: https://gamedev.stackexchange.com/questions/61796/sprite-sheet-textures-picking-up-edges-of-adjacent-texture. It should look something like this (borders exaggerated for clarity):
Otherwise, the simpler solution is to use a different texture for each face. WebGL supports cube textures, which are what skyboxes are usually implemented with.
Hacking the UVs, replacing every value of 1.0 with 0.999 and every 0 with 0.001, will partially work around the problem.
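For reference, a rough sketch of that UV hack with the legacy Geometry API used above, assuming the cube geometry is kept in a variable named geometry (the epsilon is a guess and may need tuning relative to the atlas texel size):
// pull every UV slightly inside the tile so filtering never samples the neighbouring tile
var eps = 0.001;
geometry.faceVertexUvs[ 0 ].forEach( function ( faceUvs ) {
    faceUvs.forEach( function ( uv ) {
        if ( uv.x === 0 ) uv.x = eps; else if ( uv.x === 1 ) uv.x = 1 - eps;
        if ( uv.y === 0 ) uv.y = eps; else if ( uv.y === 1 ) uv.y = 1 - eps;
    } );
} );
geometry.uvsNeedUpdate = true;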

three.js / web-vr-boilerplate / polyfill - HMD to controlled object: axis re-mapping does not also re-map rotation order/rules

I'm using your webvr-boilerplate and trying to map it to a human face mesh.
The way I do it is:
1) attach the camera to an eye bone
main js script:
//add camera to eye
mesh.skeleton.bones[ 22 ].add(camera);
//resets camera rotation
camera.rotation.set(0,0,0);
//looks at mesh up direction to face front
camera.lookAt( mesh.up );
//moves camera to middle of eyes
camera.position.set(10,10,0);
2) change webvr-manager.js to update the position and rotation of the neck bone (passed as an argument on initialization), and in index.php swap the axes so the HMD's axes match the bone's:
webvr-manager.js:
if ( state.orientation !== null ) {
object.quaternion.copy( state.orientation );
}
if ( state.position !== null ) {
object.position.copy( state.position ).multiplyScalar( scope.scale );
}
main js script:
/* INSIDE UPDATE CYCLE */
// mesh.rotation.y+=0.1;
controls.update();
//resets bone position to default
mesh.skeleton.bones[ neckVRControlBone ].position.set(neckInitPosition.x,neckInitPosition.y,neckInitPosition.z) ;
//ROTATION SWAP
mesh.skeleton.bones[ neckVRControlBone ].rotation.x = pivot.rotation.y;
mesh.skeleton.bones[ neckVRControlBone ].rotation.y = - pivot.rotation.z;
mesh.skeleton.bones[ neckVRControlBone ].rotation.z = - tempRotation;
UPDATE 28/10/2015:
To simplify: after some extra debugging I realised it is not a clamping problem.
The restated problem is:
map the VR controls to an object whose axis configuration differs from the HMD/Cardboard's, while keeping the correct rotation rules.
Example of object axis:
* x - up
* y - depth
* z - side
Swapping the rotations with just
object.rotation.x = object.rotation.z means that, after updating the controls, rotating to the side produces an undesired rotation beyond 45°.
The rotation rules for each axis are different:
x rotates up to PI, then inverts sign and keeps changing in the same direction it was going;
y rotates up to PI/2, then inverts direction (when it was increasing, it starts decreasing);
z behaves like x.
I changed webvr-polyfill.js and got it fixed for keyboard/mouse with this:
MouseKeyboardPositionSensorVRDevice.prototype.getState = function() {
// this.euler.set(this.phi, this.theta, 0, 'YXZ');
this.euler.set( this.theta , 0, - this.phi, 'YXZ');
But there is no similar line for the other controllers (HMD, Cardboard, etc.).
Maybe it would be nice if the rotation order and axis mapping were exposed to the user.
Thanks
Example - try setting swappedAxis = true in the js console and rotate the neck.
The main problem you are running into is gimbal lock because you are using Euler rotations. Use Quaternions to avoid this problem.
Additionally, the axes on your mesh appear to be flipped, so you have to account for that.
Instead of setting components of the rotation, just set the quaternion:
mesh.skeleton.bones[neckVRControlBone].quaternion.set(
pivot.quaternion.y,
-pivot.quaternion.z,
-pivot.quaternion.x,
pivot.quaternion.w
);
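A related sketch (not from the original answer): the axis swap can also be expressed as a single constant correction quaternion composed with the sensor orientation; axisCorrection below is an assumed constant that would have to be tuned to the rig:
// constant rotation mapping the HMD's axes onto the bone's axes (tune for your rig)
var axisCorrection = new THREE.Quaternion().setFromEuler(
    new THREE.Euler( 0, Math.PI / 2, 0, 'YXZ' )
);

// in the update loop: bone orientation = axis correction composed with the sensor orientation
mesh.skeleton.bones[ neckVRControlBone ].quaternion
    .copy( axisCorrection )
    .multiply( pivot.quaternion );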

Changing material color on a merged mesh with three.js

Is it possible to interact with the buffer used when merging multiple meshes in order to change the color of a selected individual mesh?
It's easy to do such a thing with a collection of meshes, but what about a merged mesh with multiple different materials?
#hgates, your last comment was very helpful to me; I was looking for the same thing for days!
OK, I set a color on each face and set vertexColors to true on the material, and that solved the problem! :)
Here is the whole approach I used, written up as a proper answer for those in the same situation:
// Define a main Geometry used for the final mesh
var mainGeometry = new THREE.Geometry();
// Create a Geometry, a Material and a Mesh shared by all the shapes you want to merge together (here I did 1000 cubes)
var cubeGeometry = new THREE.CubeGeometry( 1, 1, 1 );
var cubeMaterial = new THREE.MeshBasicMaterial({vertexColors: true});
var cubeMesh = new THREE.Mesh( cubeGeometry );
for ( var i = 0; i < 1000; i ++ ) {
// I set the color to the material for each of my cubes individually, which is just random here
cubeMaterial.color.setHex(Math.random() * 0xffffff);
// For each face of the cube, I assign the color
for ( var j = 0; j < cubeGeometry.faces.length; j ++ ) {
cubeGeometry.faces[ j ].color = cubeMaterial.color;
}
// Each cube is merged to the mainGeometry
THREE.GeometryUtils.merge(mainGeometry, cubeMesh);
}
// Then I create my final mesh, composed of the mainGeometry and the cubeMaterial
var finalMesh = new THREE.Mesh( mainGeometry, cubeMaterial );
scene.add( finalMesh );
Hope it will help as it helped me ! :)
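To then change the color of one individual cube inside the merged mesh later, here is a minimal sketch under the same setup (assuming 12 triangular Face3 faces per CubeGeometry, which holds for the three.js versions this code targets):
// recolor the i-th merged cube; each CubeGeometry contributes 12 triangular faces
function setCubeColor( i, hexColor ) {
    var facesPerCube = 12;
    for ( var f = i * facesPerCube; f < ( i + 1 ) * facesPerCube; f ++ ) {
        mainGeometry.faces[ f ].color.setHex( hexColor );
    }
    mainGeometry.colorsNeedUpdate = true;
}

setCubeColor( 42, 0xff0000 ); // turn cube number 42 red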
It depends on what you mean by "changing colors". Note that after merging, the mesh is like any other non-merged mesh.
If you mean vertex colors, it would be possible to iterate over the faces and determine, based on the material index, which vertices' colors to change.
If you mean setting a color on the material itself, sure, it's possible. Merged meshes can still have multiple materials the same way ordinary meshes do - via MeshFaceMaterial - though if you are merging yourself, you need to pass in a material index offset parameter for each geometry.
// one material per source geometry; point all of its faces at that material
this.meshMaterials.push( new THREE.MeshBasicMaterial(
    { color: 0x00ff00 * Math.random(), side: THREE.DoubleSide } ) );
for ( var i = 0; i < geometry.faces.length; i ++ ) {
    geometry.faces[ i ].materialIndex = this.meshMaterials.length - 1;
}
// merge the geometry into the shared global geometry
THREE.GeometryUtils.merge( this.globalMesh, new THREE.Mesh( geometry ) );
// finally, build the combined mesh with all the collected materials
var mesh = new THREE.Mesh( this.globalMesh, new THREE.MeshFaceMaterial( this.meshMaterials ) );
Works like a charm, for those who need an example - but! This creates multiple additional buffers (index and vertex data) and multiple drawElements calls too :( I inspected the draw calls with a WebGL inspector: before adding the MeshFaceMaterial, 75 OpenGL API calls, running at 60 fps easily; after, 3490 OpenGL API calls, and the fps drops about 20%, to 45-50 fps. This means drawElements is called for every mesh, so we lose the point of merging meshes. Did I miss something here? I want to share different materials on the same buffer.

Extruded spline (THREE.SceneUtils.createMultiMaterialObject) not responding to Three.Ray

I have a page using elements from the extruded spline example and the mouse tooltip example. I'm trying to debug the starting elements of this project before moving on. The mouse tooltip works on a variety of objects, except for the extruded spline.
I'm using the WebGL renderer, if that matters.
Code for spline creation (not including Vector3 lines or circular extrude):
function addGeometry( geometry, color, x, y, z, rx, ry, rz, s, name ) {
var mesh = THREE.SceneUtils.createMultiMaterialObject( geometry, [
new THREE.MeshLambertMaterial( { color: color } )
] );
mesh.position.set( x, y, z );
mesh.scale.set( s, s, s );
mesh.name = name;
scene.add( mesh );
}
The intersect/Three.Ray code in update() is the same as the example linked above. I also tried adding the spline to a parent but still no changes onMouseOver. Later this week I might transition over to ThreeX DOM events and Tween :D
Minor issue, which could be a separate question: I'm seeing some inaccuracy in ray-based onMouseOver events on planes. Could it be the fact that I'm using large distances? The planes are 1000x1000 and the camera is 2000 units away. I know that's ridiculous and I'm in the process of fixing that as well.
Thanks for listening!
Since you have only provided code snippets, here is a guess: Because a Multi-material object is hierarchical, you need to set the recursive flag in ray.intersectObjects() to true like so:
var intersects = ray.intersectObjects( scene.children, true );
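One extra detail worth noting, assuming the usual behaviour of createMultiMaterialObject (it returns an Object3D that wraps one child mesh per material): with the recursive flag the intersection reports the child mesh, so the name set on the wrapper has to be read from its parent, for example:
var intersects = ray.intersectObjects( scene.children, true );
if ( intersects.length > 0 ) {
    // the hit object is one of the wrapper's child meshes; the name was set on the wrapper
    var wrapper = intersects[ 0 ].object.parent;
    console.log( 'hovering over: ' + wrapper.name );
}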
