Three.js Raycaster intersectObjects too slow?

I'm writing a Three.js prototype for interacting with objects using the Leap Motion. Each frame (or regularly anyway), I want to check if the representation of the user's finger is above or beneath an object in the scene.
I've done this with the code below, but the intersectObjects call is taking about 200 milliseconds, even though it's only testing one object. This makes the animation slow down and become very jerky (I've tried running the check e.g. once every 20 frames instead of every frame, but then it still jerks every 20 frames).
Is there a way to do this quicker? Am I doing something wrong? How do other people deal with this?
Thanks!
Code:
...
var filepath = '../models/Scissors.js';
loader.load(filepath, function ( geometry, materials ) {
    scissors = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    scene.add( scissors );
});
...
function update() {
    ...
    // NB. sphere1 has been positioned to represent the user's index finger
    // in 3D space.
    // Clone first: subSelf() would otherwise modify sphere1.position in place.
    var vector = sphere1.position.clone().subSelf( camera.position );
    var ray = new THREE.Raycaster( camera.position, vector.normalize() );
    var start = new Date().getTime();
    var collisions = ray.intersectObjects( [ scissors ] );
    // Takes about 200ms
    console.log( 'Took ' + ( new Date().getTime() - start ) + ' ms' );
    if ( collisions.length > 0 ) {
        console.log( 'HIT!' );
    }
    ...
    requestAnimFrame( update );
}

Silly me, of course the reason it's slow is that the scissors object is a non-trivial model. I'm now containing it within an invisible cube and testing against that instead, and it's super fast (0-1 milliseconds) :-)
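For reference, here is a minimal sketch of that proxy approach (the box dimensions, the invisible material, and the variable names are illustrative assumptions, not from the original code). Raycasting generally tests meshes even when their material is invisible, so the proxy can stay hidden:

var proxyGeometry = new THREE.CubeGeometry( 10, 4, 1 ); // rough scissors extents (assumed)
var proxyMaterial = new THREE.MeshBasicMaterial( { visible: false } );
var scissorsProxy = new THREE.Mesh( proxyGeometry, proxyMaterial );
scissorsProxy.position.copy( scissors.position );
scene.add( scissorsProxy );
// In update(), test the 12-triangle box instead of the detailed model:
var collisions = ray.intersectObjects( [ scissorsProxy ] );

Intersecting a box is essentially free compared with walking every face of a dense mesh, which is where the 200 ms went.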

Related

Set target of directional light in THREE.js

I have a model far away from the origin, and I want a directional light to hit the model like sunlight would.
I set a position and a target for my DirectionalLight:
export const dirLight = getDirectional();

function getDirectional() {
    const dirLight = new DirectionalLight( 0xffffff, 1 );
    dirLight.position.set( 585000 + 10000, 6135000 + 10000, -500 + 5000 );
    return dirLight;
}

const helper = new THREE.DirectionalLightHelper( dirLight, 1000 );

let t = new THREE.Object3D();
t.translateX( 585000 );
t.translateY( 6135000 );
t.translateZ( 1000 );
dirLight.target = t;

scene.add( dirLight );
scene.add( dirLight.target ); // t === dirLight.target, so one add is enough

helper.update();
scene.add( helper );
I would expect the light direction now to be parallel to the vector between the light position and the light target, but apparently the light direction still points towards the origin of the scene. What am I doing wrong?
A running example can be seen here
The documentation states that the target needs to be added to the scene so that its world coordinates are calculated. However, that alone does not seem to work here.
So instead I tried manually updating the target's world coordinates, and that worked. It will probably only work with a static target.
In your case that would mean adding
dirLight.target.updateMatrixWorld();
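Put together, a minimal sketch of the fix (variable names follow the question's code):

let t = new THREE.Object3D();
t.position.set( 585000, 6135000, 1000 );
dirLight.target = t;
scene.add( dirLight.target );
// Force the target's world matrix to update once; sufficient here
// because the target never moves afterwards.
dirLight.target.updateMatrixWorld();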

Make environment map scale when moving away from the object

I use CubeCamera to build a simple reflection model. The setup can be seen in the picture below.
If the camera is close enough to the cube, the reflection looks fine. However, if I move away from the objects, the reflection just gets bigger; see the picture below.
This is not the way I want it. I'd like the reflection to get proportionally smaller. I tried playing with different settings, then thought this could be achieved with a proper shader program (squishing the cube texture, so to speak), so I tried to mess with the existing PhongShader, but no luck there; I'm too much of a newbie for this.
Also, I've noticed that if I change the width and height of cubeCamera.renderTarget, i.e.
cubeCamera.renderTarget.width = cubeCamera.renderTarget.height = 150;
I can get the proper dimensions of the reflection, but its position on the surface is wrong. It's visible from the angle shown in the picture below, but not if I place the camera straight on. It looks like the texture needs to be centered.
The actual code is pretty straightforward:
var cubeCamera = new THREE.CubeCamera( 1, 520, 512 );
cubeCamera.position.set( 0, 1, 0 );
cubeCamera.renderTarget.format = THREE.RGBAFormat;
scene.add( cubeCamera );

var reflectorObj = new THREE.Mesh(
    new THREE.CubeGeometry( 20, 20, 20 ),
    new THREE.MeshPhongMaterial( {
        envMap: cubeCamera.renderTarget,
        reflectivity: 0.3
    } )
);
reflectorObj.position.set( 0, 0, 0 );
scene.add( reflectorObj );

var reflectionObj = new THREE.Mesh(
    new THREE.SphereGeometry( 5 ),
    new THREE.MeshBasicMaterial( {
        color: 0x00ff00
    } )
);
reflectionObj.position.set( 0, -5, 20 );
scene.add( reflectionObj );

function animate() {
    // hide the reflector while rendering the cube map so it doesn't see itself
    reflectorObj.visible = false;
    cubeCamera.updateCubeMap( renderer, scene );
    reflectorObj.visible = true;

    renderer.render( scene, camera );
    requestAnimationFrame( animate );
}
Appreciate any help!
Environment mapping in three.js is based on the assumption that the object being reflected is "infinitely" far away from the reflective surface.
The reflected ray used in the environment map look-up does not emanate from the surface of the reflective material, but from the CubeCamera's center. This approximation is OK, as long as the reflected object is sufficiently far away. In your case it is not.
You can read more about this topic in this tutorial.
three.js r.58

Intermittent semi-transparent sphere in Three.js

I would like somebody to explain to me how I can achieve the blue semi-transparent intermittent sphere of this example (the one next to the intermittent red sphere):
http://threejs.org/examples/webgl_materials.html
I believe, first of all, that this is a matter of using the right material with the right settings (especially since the example is about materials), but I'm not sure.
Hopefully you don't feel this question doesn't deserve to be asked here. I tried to analyze the example, but it is definitely written in a way that is unfriendly to newbies; I haven't been able to isolate this part from the rest, nor can I find an explanation anywhere else.
To create, for example, a partially transparent blue sphere, you could try:
var sphereGeom = new THREE.SphereGeometry( 40, 32, 16 );
var blueMaterial = new THREE.MeshBasicMaterial( { color: 0x0000ff, transparent: true, opacity: 0.5 } );
var sphere = new THREE.Mesh( sphereGeom, blueMaterial );
For more examples of creating semi-transparent materials, check out
http://stemkoski.github.io/Three.js/Translucence.html
If you want the sphere to fade in and out, you can change the transparency in your update or render function. Make the sphere a global object, and also create a (global) clock object in your initialization to keep track of the time, for example with
clock = new THREE.Clock();
and then in your update, you could, for example, write
sphere.material.opacity = 0.5 * (1 + Math.sin( clock.getElapsedTime() ) );
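Put together, a minimal sketch of the fading sphere (assuming scene, camera, and renderer are set up as usual):

clock = new THREE.Clock();
var sphereGeom = new THREE.SphereGeometry( 40, 32, 16 );
var blueMaterial = new THREE.MeshBasicMaterial( { color: 0x0000ff, transparent: true, opacity: 0.5 } );
sphere = new THREE.Mesh( sphereGeom, blueMaterial );
scene.add( sphere );

function update() {
    // oscillate opacity between 0 and 1 over time
    sphere.material.opacity = 0.5 * ( 1 + Math.sin( clock.getElapsedTime() ) );
    renderer.render( scene, camera );
    requestAnimationFrame( update );
}
update();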

Three.js - Collision Detection - Obj Wavefront

I am trying to build a virtual tour inside a building (the whole building is one OBJ model) using three.js. Everything loads fine and the library is pretty straightforward. My most critical issue is that I can't implement collision detection for the camera; I tried using rays, but couldn't find a suitable example for my case.
My model load:
var loader = new THREE.OBJMTLLoader();
loader.addEventListener( 'load', function ( event ) {
    var newModel = event.content;
    newModel.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.castShadow = true;
            child.receiveShadow = true;
        }
    } );
    scene.add( newModel );
    objects.push( newModel );
} );
loader.load( 'model/model.obj', 'model/model.mtl' );
The camera creation (I don't know if it is relevant to the issue)
camera = new THREE.PerspectiveCamera(
    45,
    window.innerWidth / window.innerHeight,
    1,
    10000
);
camera.position.set( 0, 25, 0 );
camera.lookAt( new THREE.Vector3( 0, 0, 0 ) ); // lookAt expects a Vector3 here
NOTE: The camera moves inside the model. I don't want to detect collisions between two separate OBJ models; I want to detect collisions (and stop the camera from passing through walls) inside one single model.
Any help will be greatly appreciated.
Looking at the documentation for Raycaster in Three.js at http://threejs.org/docs/58/#Reference/Core/Raycaster, you can create a ray like Raycaster( origin, direction, near, far ). Perhaps for you this would look something like (note that near must be a number):
var ray = new THREE.Raycaster( camera.position, cameraForwardDirection, 0, collisionDistance );
where cameraForwardDirection is the direction the camera is facing. You can get it by applying just the camera's rotation (not its full world matrix, which would also add the camera's translation) to the camera's default viewing axis:
var rotationMatrix = new THREE.Matrix4();
rotationMatrix.extractRotation( camera.matrixWorld );
var cameraForwardDirection = new THREE.Vector3( 0, 0, -1 ).applyMatrix4( rotationMatrix );
This works because the camera points down its negative Z axis (hence the 0, 0, -1) and we want to apply the camera's orientation to this vector. This assumes you are only moving forward; if you wanted to check for collisions in other directions, you could cast rays in those directions as well.
collisionDistance would be the minimum distance for a collision. You can experiment with this to find what works with respect to the scale of things in your scene.
Once you have cast this ray, you will need to check for intersections. You can use the ray.intersectObject( object, recursive ) method. Since it seems like you just have that one model, it might look something like:
var intersects = ray.intersectObject( newModel, true );
if ( intersects.length > 0 ) {
    // stop the camera from moving farther into the wall
}
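Putting the pieces together, a per-frame check might look like the sketch below (collisionDistance and the movement helper are assumptions, not from the original post):

var collisionDistance = 10; // tune to the scale of your model

function wallAhead() {
    // Forward direction: the camera's rotation applied to its default view axis
    var rotationMatrix = new THREE.Matrix4();
    rotationMatrix.extractRotation( camera.matrixWorld );
    var forward = new THREE.Vector3( 0, 0, -1 ).applyMatrix4( rotationMatrix );
    var ray = new THREE.Raycaster( camera.position, forward, 0, collisionDistance );
    return ray.intersectObject( newModel, true ).length > 0;
}

// In the render loop: only move forward while the way is clear
if ( !wallAhead() ) {
    moveCameraForward(); // hypothetical movement helper
}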

How to make reflective materials change when camera rotates

I was able to make some nice metal and glass looking materials by using Skybox Cube / environment mapping.
I have made my own controls, which allow one both to orbit and to move/look around as in FirstPersonControls.
The problem is that the reflections only look convincing when I move around: I can see them move and change according to my camera movement. However, when I look around (rotate the camera / change its target), there is no change in the reflections; they are completely static.
I can see the same behaviour in, for example, three.js/examples/webgl_materials_cubemap_escher.html: if I modify it to use FirstPersonControls, the material does not look reflective/refractive at all when I look around.
Here's how I set up the cubemaps. To be honest, it's copied from some example and I don't understand all of it. But it works, except for this one issue...
createSkyBox = function ( urlPrefix ) {
    var sceneCube = new THREE.Scene();

    var path = urlPrefix;
    var format = '.jpg';
    var urls = [
        path + 'px' + format, path + 'nx' + format,
        path + 'py' + format, path + 'ny' + format,
        path + 'pz' + format, path + 'nz' + format
    ];

    var reflectionCube = THREE.ImageUtils.loadTextureCube( urls );
    reflectionCube.format = THREE.RGBFormat;

    var refractionCube = new THREE.Texture( reflectionCube.image, new THREE.CubeRefractionMapping() );
    refractionCube.format = THREE.RGBFormat;

    // Skybox
    var shader = THREE.ShaderUtils.lib[ "cube" ];
    shader.uniforms[ "tCube" ].value = reflectionCube;

    var material = new THREE.ShaderMaterial( {
        fragmentShader: shader.fragmentShader,
        vertexShader: shader.vertexShader,
        uniforms: shader.uniforms,
        depthWrite: false,
        side: THREE.BackSide
    } );

    var size = 8000;
    mesh = new THREE.Mesh( new THREE.CubeGeometry( size, size, size ), material );
    mesh.geometry.computeBoundingBox();
    sceneCube.add( mesh );

    this._threejs_cube_scene = sceneCube;
    this._threejs_cube_mesh = mesh;
    this._threejs_envmap = reflectionCube;
    this._threejs_envmap_refraction = refractionCube;

    this._threejs_scene.add( sceneCube );
}
And here's the way I create the material:
var material = new THREE.MeshLambertMaterial( { color: 0xff00, ambient: 0xaaaaaa, envMap: this._threejs_envmap});
I then use the material in renderer.overrideMaterial (I'm using EffectComposer, if it makes any difference)
EDIT: Now that I think about it, I'm not sure... my brain melts... it might be how real life works :) At least intuitively, when I see the code in action, the static reflections while rotating the camera don't feel right. But maybe that's because in real life it's hard to look around (eye.lookAt()) without also moving ever so slightly (eye.position = xyz).
You should calculate the reflection vector in world space (inside your code for fragmentShader, which you don't show here). If it's in object space or view (camera) space, it won't move naturally.
Yes, this may mean some finagling with the surface normals. To convert object-space normals to world-space normals, use the inverse transpose of the world matrix. You'll also need the view vector in world-space coordinates in order to calculate the final world-space reflection vector.
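For illustration, here is a minimal sketch of a ShaderMaterial that does the look-up in world space. This is not the poster's shader; the names and the uniform syntax are assumptions, and mat3( modelMatrix ) is only a valid normal transform when the object's scale is uniform:

var worldReflectMaterial = new THREE.ShaderMaterial( {
    uniforms: {
        envMap: { value: reflectionCube } // the cube texture loaded above
    },
    vertexShader: [
        'varying vec3 vWorldPosition;',
        'varying vec3 vWorldNormal;',
        'void main() {',
        '    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );',
        '    vWorldPosition = worldPosition.xyz;',
        '    vWorldNormal = mat3( modelMatrix ) * normal;',
        '    gl_Position = projectionMatrix * viewMatrix * worldPosition;',
        '}'
    ].join( '\n' ),
    fragmentShader: [
        'uniform samplerCube envMap;',
        'varying vec3 vWorldPosition;',
        'varying vec3 vWorldNormal;',
        'void main() {',
        '    // Both vectors are in world space, so the reflection responds',
        '    // to camera rotation as well as camera position.',
        '    vec3 viewDir = normalize( vWorldPosition - cameraPosition );',
        '    vec3 reflectDir = reflect( viewDir, normalize( vWorldNormal ) );',
        '    gl_FragColor = textureCube( envMap, reflectDir );',
        '}'
    ].join( '\n' )
} );

(cameraPosition, modelMatrix, viewMatrix, and projectionMatrix are uniforms that three.js supplies to every ShaderMaterial.)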
Another thing to consider, simpler than changing the shader, is giving your camera an offset so that it rotates like a human head. Add the camera to an Object3D and offset it from the Object3D's position by a small amount (roughly the distance from the centre of a human head to the eye), then rotate the Object3D instead of the camera. A quick sketch follows below.
It's sort of hard to tell from your description what effect you want, though, because when you simply turn your eyeballs, a reflection doesn't change; it's the slight tilt of your head that changes it.
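Here is that head-pivot idea in code (the offset values are assumptions):

// Rotate a pivot ('head') instead of the camera ('eye'); the small offset
// means a rotation also translates the eye slightly, so reflections change.
var head = new THREE.Object3D();
head.position.set( 0, 1.7, 0 ); // approximate head height
camera.position.set( 0, 0, 0.1 ); // eye offset from the head's centre (assumed)
head.add( camera );
scene.add( head );
// In your controls / render loop, rotate the pivot:
head.rotation.y += yawDelta; // yawDelta comes from your own controls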