I'm using three.js and OrbitControls.js together in a 3D app. Sometimes WebGLRenderer.render gets called with an undefined camera. This happens when I use the mouse to navigate the 3D model, using the controls provided by OrbitControls.
The undefined camera argument is weird, as my animate function (see code below) always calls WebGLRenderer.render with a well defined camera. Also, when instantiating OrbitControls, I give it a well defined camera. So the question is - how can it be that WebGLRenderer.render is at some point called with an undefined camera?
NOTE: The source code for the WebGLRenderer.render function can be seen in line 20426 in the three.js source code.
I attempted to locate all the potential call sites by searching for the text string render(. There are 14 such matches, but none of them passes an undefined camera as an argument. Thus, the trail went cold.
I tried stack tracing from the callee (the function body of the WebGLRenderer.render function), but this merely led back to some mix-it-all event hub. It did give me a hint, though, that the caller might be JavaScript itself, calling from its DOM event system. That would explain why I couldn't find the call site in the three.js source code.
Thus perhaps the problem is associated with the point at which OrbitControls interacts with the JavaScript event system. When initialising my app, I register an event listener on my OrbitControls instance. See the code below. Could this be causing trouble? No camera is given as an argument when this happens though :/
var myControls = require('../../instances/three/myControls');
var myRenderer = require('../../instances/three/myRenderer');
myControls.damping = 0.2;
myControls.addEventListener( 'change', myRenderer.render );
EDIT:
I'm using these versions:
three.js 0.81.0
three-orbit-controls 72.0.0
So, the offending call to render was indeed coming from an event system (the JavaScript DOM event system). This module shows where the render function is given as a callback to an event listener:
var myControls = require('../../instances/three/myControls');
var myRenderer = require('../../instances/three/myRenderer');
myControls.damping = 0.2;
myControls.addEventListener( 'change', myRenderer.render );
module.exports = {};
( where the myControls module looks like this:
var THREE = require('three');
var orbitControls = require('three-orbit-controls');
var myRenderer = require('../../instances/three/myRenderer');
var myCamera = require('./myCamera');
require('../../sideeffects/three/addCamera');
// add orbitControls to THREE:
var OrbitControls = orbitControls( THREE );
// make controls:
var controls = new OrbitControls( myCamera, myRenderer.domElement );
module.exports = controls;
)
However, it seems that OrbitControls.js has been changed in such a way that instances of the OrbitControls constructor are no longer DOM elements. Thus, one CANNOT register events on them, as I was doing with the line:
myControls.addEventListener( 'change', myRenderer.render );
Instead, instances of the OrbitControls constructor have a new property, domElement, which, as you might have guessed, is indeed a DOM element. Thus, I can fix the problem by changing the target of my event attachment to myControls.domElement, so that the above line comes to look like this:
myControls.domElement.addEventListener( 'change', myRenderer.render );
and that solved the problem :)
EDIT:
It turns out that this line can actually be omitted completely! It is only necessary if there is no animation loop (a perpetual loop that calls requestAnimationFrame on each step).
Also, damping can be enabled by doing this:
myControls.enableDamping = true;
and then adding myControls.update(); somewhere in your animation loop.
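For completeness, here is a minimal sketch of such an animation loop, reusing the module names from above (the myScene module and the require paths are assumptions for illustration):
var myRenderer = require('../../instances/three/myRenderer');
var myCamera = require('../../instances/three/myCamera');
var myControls = require('../../instances/three/myControls');
var myScene = require('../../instances/three/myScene'); // hypothetical scene module
myControls.enableDamping = true;
function animate() {
    requestAnimationFrame( animate );
    myControls.update(); // required when enableDamping is true
    myRenderer.render( myScene, myCamera );
}
animate();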
Related
I have this example. As you can see, the event used for adding decals to the object is 'pointerup', as in the following function:
window.addEventListener( 'pointerup', function ( event ) {
if ( moved === false ) {
checkIntersection( event.clientX, event.clientY );
if ( intersection.intersects ) shoot();
}
} );
I wonder how I can add decals while the mouse/pointer is pressed. If I could do that, it would be like the action of drawing, which is what I want to achieve...
The problem is that I can't figure out which event and function I should use to repeatedly track each move and append it...
You can do this by combining pointerdown, pointerup and pointermove event listeners. Use the first two to manage a boolean variable, e.g. drawing: on pointerdown you set it to true, on pointerup you set it to false. You then know when the interaction is in the drawing state.
In the pointermove event listener, you check for drawing. If set to true, you execute the actual drawing logic. The official three.js example webgl_materials_texture_canvas demonstrates this workflow. The idea of the example is to draw on a canvas which is used as a texture for a cube.
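A minimal sketch of that pattern, assuming checkIntersection(), intersection and shoot() behave as in the snippet above:
let drawing = false;
window.addEventListener( 'pointerdown', function () {
    drawing = true;
} );
window.addEventListener( 'pointerup', function () {
    drawing = false;
} );
window.addEventListener( 'pointermove', function ( event ) {
    // only add decals while the pointer is held down
    if ( drawing === false ) return;
    checkIntersection( event.clientX, event.clientY );
    if ( intersection.intersects ) shoot();
} );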
For learning purposes, I downloaded a royalty-free FBX model from a website, which happens to be a helicopter. I want to emulate the rotation of the helicopter blades programmatically in three.js. I imported the model successfully by means of FBXLoader, without any problem. I checked its meshes in Blender, and it has more than fifty meshes. I pinpointed the blades' meshes and wrote this in the load() function:
pivotPoint = new THREE.Object3D();
const loader = new THREE.FBXLoader();
group = new THREE.Object3D();
loader.load(
'Apache.fbx',
object => {
scene.add(object);
const twentyFive = scene.getObjectByName('Mesh25'); //This is the shaft which the blades should rotate around
console.log(twentyFive); //x: 685.594482421875, y: 136.4067840576172, z: -501.9534606933594
twentyFive.add(pivotPoint);
const twentyEight = scene.getObjectByName('Mesh28');//These four are the blades
const twentyNine = scene.getObjectByName('Mesh29');
const twentySeven = scene.getObjectByName('Mesh27');
const twentySix = scene.getObjectByName('Mesh26');
group.add(twentyEight);
group.add(twentyNine);
group.add(twentySeven);
group.add(twentySix);
pivotPoint.add(group);
scene.add(pivotPoint);
scene.add(twentyFive);
},
progress => ...,
error => ...
);
and the following in the render loop function:
pivotPoint.rotation.y += 0.01;
However, either the four blades disappear once I add the nesting Object3Ds, or, with the code in the version above (after numerous mutations), the four blades strangely rotate around some other point up in the sky, away from the fuselage, while the awe-stricken pilot watches the catastrophe, amazed by the aforementioned code, as if the helicopter were about to crash any second!
I have tried many changes to the code. I had once used Object3D parenting for some light sources in another scene, but I have no idea what the issue is now. Besides, the rotation of the blades around Mesh25 (my intended pivot) follows a big circle that never touches the fuselage, although all four blades beautifully revolve around their own centre of mass.
I really appreciate any help, as I really need to learn to wrestle with similar imported models.
Use attach instead of add in the appropriate places.
const twentyFive = scene.getObjectByName('Mesh25');
// add the pivot and group first so they are in the scene
pivotPoint.add(group);
twentyFive.add(pivotPoint);
const twentyEight = scene.getObjectByName('Mesh28');
const twentyNine = scene.getObjectByName('Mesh29');
const twentySeven = scene.getObjectByName('Mesh27');
const twentySix = scene.getObjectByName('Mesh26');
// use attach to move something in the scene hierarchy without
// changing its position
group.attach(twentyEight);
group.attach(twentyNine);
group.attach(twentySeven);
group.attach(twentySix);
This assumes the model is created correctly in the first place and that the shaft's origin (Mesh25) is in the center of the shaft.
Note: If the shaft's origin is in the correct position and the blades are already children of the shaft you can just rotate the shaft.
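In that case the render loop from the question reduces to rotating the shaft directly (a sketch, assuming Mesh25 really is the shaft and the blade meshes are its children):
const shaft = scene.getObjectByName('Mesh25');
// in the render loop:
shaft.rotation.y += 0.01;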
I'm fairly new to three.js and trying to get a better understanding of ray casting. I have used it so far in a game to detect when one object collides with another, which works perfectly. In the game I'm building, this is used to take health from the hero as it crashes into walls.
I am now trying to implement a target which, when hovered over some objects, will auto-shoot. However, the target only registers a hit once the object passes through the target mesh, rather than when the ray cast through the target hits the object.
To explain further: the ray is cast from the camera through the target; if the target is over a mesh (stored in an object array), then I want it to trigger a function.
Within my update function I have this:
var ray = new THREE.Raycaster();
var crossHairClone = crossHair.position.clone();
var coards = {};
coards.x = crossHairClone.x
coards.y = crossHairClone.y
ray.setFromCamera(coards, camera);
var collisionResults = ray.intersectObjects( collidableMeshList );
if ( collisionResults.length > 0 ) {
console.log('Target Hit!', collisionResults)
}
The console log is only triggered when the collidableMeshList actually touches the target mesh rather than when it is aiming at it.
How do I extend the ray to pass through the target (which I thought it was already doing) so that if anything hits the ray then it triggers my console log.
Edit
I've added a URL to the game in progress. There are other wider issues with the game, my current focus is just the target ray casting.
Game Link
Many thanks to everyone who helped me along the way on this one, but I finally solved it, only not through the means I had expected. Upon reading into the setFromCamera() function, it looks like it is mainly geared towards mouse coordinates, which in my case was not what I wanted. I wanted a target on the page and the ray to pass through it. For some reason my ray was shooting vertically up from the centre of the whole scene.
In the end I tried a different approach and set the ray's position and direction directly rather than relying on setting it from the camera:
var vector = new THREE.Vector3(crossHair.position.x, crossHair.position.y, crossHair.position.z);
var targetRay = new THREE.Raycaster(camera.position, vector.sub(camera.position).normalize());
var enemyHit = targetRay.intersectObject( enemy );
if ( enemyHit.length > 0 ) {
console.log('Target Hit!', targetRay)
}
I'll leave this up for a while before accepting this answer in case anyone has a better approach than this or has any corrections over what I have said in regards to setFromCamera().
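For reference, setFromCamera() expects normalized device coordinates (x and y in the range -1 to +1) rather than world-space positions, so a sketch of how the original approach could have been made to work (assuming crossHair.position is in world space) might look like this:
// project the crosshair's world position into normalized device coordinates
var ndc = crossHair.position.clone().project( camera );
var ray = new THREE.Raycaster();
ray.setFromCamera( new THREE.Vector2( ndc.x, ndc.y ), camera );
var collisionResults = ray.intersectObjects( collidableMeshList );
if ( collisionResults.length > 0 ) {
    console.log('Target Hit!', collisionResults);
}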
I have two different threejs scenes and each has its own camera. I can control each camera individually with a corresponding TrackballControls instance.
Is there a reliable way to 'lock' or 'bind' these controls together, so that manipulating one causes the same camera repositioning in the other? My current approach is to add change listeners to the controls and update both cameras on either's change, but this isn't very neat as, for one, both controls can be changing at once (due to damping).
I believe it should work if you set the matrices of the second camera to the values of the first and disable automatic matrix-updates of both cameras:
camera2.matrix = camera1.matrix;
camera2.projectionMatrix = camera1.projectionMatrix;
camera1.matrixAutoUpdate = false;
camera2.matrixAutoUpdate = false;
But now you need to update the matrix manually in your render loop:
camera1.updateMatrix();
That call will take the values for position, rotation and scale (which have been updated by the controls) and compose them into camera1.matrix, which, per the assignment above, is also used as the matrix for the second camera.
However, this feels a bit hacky and can lead to all sorts of weird problems. I personally would probably prefer the more explicit approach you have already implemented.
The question is why you are even using two camera and controls instances in the first place. As long as the camera isn't added to the scene, you can just render both scenes using the same camera.
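A sketch of that single-camera setup (assuming two renderers, renderer1 and renderer2, and that TrackballControls has been loaded from the three.js examples):
var camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, 0.1, 1000 );
var controls = new THREE.TrackballControls( camera, renderer1.domElement );
function animate() {
    requestAnimationFrame( animate );
    controls.update();
    // the same camera renders both scenes, so the views stay in sync
    renderer1.render( scene1, camera );
    renderer2.render( scene2, camera );
}
animate();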
Is it possible to use the Observer or Publisher design patterns to control these objects?
It seems that you are manipulating the cameras with a control. You might create an object that has the same control interface, but when you pass a command to the object, it repeats that same command to each of the subscribed or registered cameras.
/* pseudo code : es6 */
class MasterControl {
constructor(){
this.camera_bindings = [];
}
control_action1(){
for( var camera of this.camera_bindings ){
camera.control_action1();
}
}
control_action2( arg1, arg2 ){
for( var camera of this.camera_bindings ){
camera.control_action2( arg1, arg2 );
}
}
bindCamera( camera ){
if( this.camera_bindings.indexOf( camera ) === -1 ){
this.camera_bindings.push( camera );
}
}
}
var master = new MasterControl();
master.bindCamera( camera1 );
master.bindCamera( camera2 );
master.bindCamera( camera3 );
let STEP_X = -5;
let STEP_Y = 10;
//the following command will send the command to all three cameras
master.control_action2( STEP_X, STEP_Y );
This binding is self-created rather than using native three.js features, but it is easy to implement and can get you functional quickly.
Note: I wrote my pseudocode in ES6 because it is simpler and easier to communicate. You can write it in ES5 or older, but you must change the class definition into a series of functional object definitions that create the master object and its functionality.
I am a complete newbie to 3D programming and have been working with three.js for just over a week now. I have managed to load multiple Collada and OBJ files, and have perspectives, trackballs, everything working from the examples. However, now I am stuck and need some help. For reference, you can see the file I'm posting about at the following URL:
http://shaman-labz.appspot.com/webgl_loader_obj_mtl.html
This page is basically straight from the examples, except that I am loading an OBJ file which contains all these objects as meshes. What I am working on now is this: after loading the OBJ, I am going to iterate through all the geometries in the object and extract them, so that I can drop them into the scene one at a time when I need them, or have them float around like bubbles or something. I was thinking of maybe trying to use the Fresnel example, but this is where I'm hitting up against the boundaries of my understanding, and some terminology escapes me.
My question is, when this runs, the object that returns and is added to the scene after load has all these gems in it together.
So instead of the following lines:
var loader = new THREE.OBJMTLLoader();
loader.addEventListener( 'load', function ( event ) {
var object = event.content;
object.position.y = - 100;
scene.add( object );
});
loader.load( 'obj/gems/24.obj', 'obj/gems/24.mtl' );
So what I'm doing is this: when the object returns, I look at its internals in debug/break mode, and I see that it's an Object3D and that object.children is an array of 25 meshes... and each of those meshes would be one of my 'gems' that I want to work with individually.
So here's where I get lost... when I grab the 'mesh', do I need to strip out the underlying geometry and create a new mesh?
On this page, you can see what I tried to accomplish:
http://shaman-labz.appspot.com/webgl_loader_obj_mtl2.html
The only major difference is in this section of code:
var loader = new THREE.OBJMTLLoader();
loader.addEventListener( 'load', function ( event ) {
var object = event.content;
var pos=0;
for(var i=0; i< object.children.length; i++){
var m = object.children[i];
var gem = new THREE.Object3D();
gem.name=m.name;
gem.add(m);
gem.position.x = -10;
gem.position.y = -10;
gem.position.z = pos;
scene.add( gem );
pos = pos - 10;
}
});
loader.load( 'obj/gems/24.obj', 'obj/gems/24.mtl' );
Notice that only 13 of the 25 gems show up from the collection, and also note how they are scattered, indicating that they are somehow still linked to some higher-order relationships that I am unable to set properly with position. It's as if each mesh is somehow offset relative to its original positioning in the original loaded object... I'm thinking this has to do with the world matrix?