When I finish playing an animation with camera movement, I want the camera controls to pick up where the clamped animation left off. But everything I have tried so far results in the camera jumping to a different location. (The lookAt position seems to be OK.)
I have tried capturing the animeCamera's attributes and resetting them after replacing the controls' .camera, but with no success.
Any suggestions or examples to look at?
var animationMixer = new THREE.AnimationMixer(gltf.scene);
var that = this;

animationMixer.addEventListener('finished', function (e) {
    // replace the default camera with the animation camera
    that.controls.camera = animeCamera;
    that.controls.update();
    that.controls.enabled = true;
});
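Roughly, what I mean by "capturing the animeCamera's attributes" is something like the sketch below (not my exact code; it assumes an OrbitControls-style controls object that exposes .object and .target, so the property names may differ for other control libraries):

animeCamera.updateMatrixWorld(true);

// Copy the animation camera's final world pose onto the controls' camera.
var worldPos = new THREE.Vector3();
var worldQuat = new THREE.Quaternion();
animeCamera.getWorldPosition(worldPos);
animeCamera.getWorldQuaternion(worldQuat);

controls.object.position.copy(worldPos);
controls.object.quaternion.copy(worldQuat);

// Re-aim the controls' target along the same look direction so they don't snap back.
var lookDir = new THREE.Vector3(0, 0, -1).applyQuaternion(worldQuat);
controls.target.copy(worldPos).add(lookDir.multiplyScalar(10)); // 10 is an arbitrary distance
controls.update();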
I'm rather new to threejs, so what I'm doing might not be the most efficient way.
I have an object in AR on a mobile device and I want to know if I intersect with it when touching on the screen.
I use the following code to generate the raycast, and it works initially.
const tempMatrix = new THREE.Matrix4();
tempMatrix.identity().extractRotation(this.controller.matrixWorld);
this.raycaster.ray.origin.setFromMatrixPosition(this.controller.matrixWorld);
this.raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);
However, I have the ability to reposition the object (i.e. reset the position so the object is in front, relative to the current camera direction and position) by moving and rotating the whole scene.
After the repositioning, the raycasting is completely offset and is not casting rays anywhere near where I touch the screen.
Repositioning is done like this (it works, but if there's a better way, let me know!):
public handleReposition(): void {
    const xRotation = Math.abs(this.camera.rotation.x) > Math.PI / 2 ? -Math.PI : 0;
    const yRotation = this.camera.rotation.y;

    this.scene.rotation.set(xRotation, yRotation, xRotation);
    this.scene.position.set(this.camera.position.x, this.camera.position.y, this.camera.position.z);
}
How can I get the raycast to hit the correct new location?
Thanks!
Assuming this.scene is actually the main three.js Scene, it's usually a bad idea to change its rotation or position, since that will affect everything inside the scene, including the controller. I'd suggest moving your object instead, or adding your object(s) to a Group and moving that.
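A rough sketch of that idea, reusing the repositioning logic from the question (arObject is just a placeholder for your own object(s)):

// Keep the movable AR content in a Group instead of transforming the whole scene.
const content = new THREE.Group();
content.add(arObject); // add your object(s) here
scene.add(content);

// Reposition the group only, so the camera and controller transforms stay untouched.
function handleReposition(camera) {
    const xRotation = Math.abs(camera.rotation.x) > Math.PI / 2 ? -Math.PI : 0;
    const yRotation = camera.rotation.y;
    content.rotation.set(xRotation, yRotation, xRotation);
    content.position.copy(camera.position);
}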
I'm trying to make a cube mesh that is always positioned in front of the XR camera.
No matter how I move my phone camera, the cube should appear right in front of the camera showing only one side of the cube.
Firstly, I added a cube mesh to the scene in the beginning:
material = new THREE.MeshLambertMaterial({ color: 0x9797CE });
box = new THREE.Mesh(new THREE.CubeGeometry(1, 1, 1), material);
box.position.set(0, 0, -3);
scene.add(box);
And then tried to draw the box in front of the XR camera:
function animate() {
    let xrCamera = renderer.xr.getCamera(camera);

    box.position.set(xrCamera.position.x, xrCamera.position.y, xrCamera.position.z - 3);
    box.rotation.set(xrCamera.rotation.x, xrCamera.rotation.y, xrCamera.rotation.z);

    renderer.render(scene, camera);
}
When I run the code, the cube appears in front of my phone camera.
But when I rotate my phone, the cube rotates in place instead of following the camera.
I also tried xrCamera.add(box) but it doesn't seem to work.
How can I correctly make the cube always stay in front of the XR camera?
It's important to know that currently (r115) the transformation properties position, rotation and scale as well as the local matrix of the XR camera are not updated.
So instead of adding the box to xrCamera, add it to camera. Besides, keep in mind that WebXRManager.getCamera() is intended for internal use only and is not part of the public API.
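A minimal sketch of that setup (assuming camera is the ordinary camera you pass to renderer.render()):

// Parent the box to the normal camera; during a WebXR session three.js updates
// this camera's world transform from the device pose each frame.
const material = new THREE.MeshLambertMaterial({ color: 0x9797CE });
const box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);
box.position.set(0, 0, -3); // 3 units in front of the camera, in camera space

camera.add(box);
scene.add(camera); // the camera must be in the scene graph so its children are rendered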
I have been solving a similar problem. I needed to get a point in front of the camera using the A-Frame API, but the challenge was that the experience was in VR mode (fullscreen), playing on a mobile device or a headset. In this context the current camera is managed entirely by WebXR: THREE applies the headset pose to the camera's object3D internally.
You can only use the matrixWorld of the three.js camera to access the camera's world reference data; other properties and methods are not correct. In the case of A-Frame you must access the object3D of the A-Frame camera entity and work with its matrixWorld. It is the only way to get correct position/rotation/scale information for a camera that is driven by the sensors of a mobile device or AR/VR goggles while in VR/AR mode.
This is how I get the position in front of the camera with the WebXR headset pose:
const distanceFromCamera = 250; // the depth into the screen, whatever you need
const inFrontOfCameraPosition = new AFRAME.THREE.Vector3( 0, 0, -distanceFromCamera );
const threeSceneCamera = <THREE.PerspectiveCamera>AFRAME.scenes[0].camera;
inFrontOfCameraPosition.applyMatrix4( threeSceneCamera.matrixWorld );
return { x: inFrontOfCameraPosition.x, y: inFrontOfCameraPosition.y, z: inFrontOfCameraPosition.z };
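For example, if you wrap that snippet in a helper (the name getPointInFrontOfCamera and the entity id #reticle below are only illustrative), you can place an A-Frame entity at the returned point:

const point = getPointInFrontOfCamera(); // hypothetical wrapper around the snippet above
const el = document.querySelector('#reticle'); // assumed entity id
el.object3D.position.set(point.x, point.y, point.z);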
I am trying to make use of Raycaster in a ThreeJS scene to create a sort of VR interaction.
Everything works fine in normal mode, but not when I enable stereo effect.
I am using the following snippet of code.
// "camera" is a ThreeJS camera, "objectContainer" contains objects (Object3D) that I want to interact with
var raycaster = new THREE.Raycaster(),
origin = new THREE.Vector2();
origin.x = 0; origin.y = 0;
raycaster.setFromCamera(origin, camera);
var intersects = raycaster.intersectObjects(objectContainer.children, true);
if (intersects.length > 0 && intersects[0].object.visible === true) {
// trigger some function myFunc()
}
So basically when I try the above snippet of code in normal mode, myFunc gets triggered whenever I am looking at any of the concerned 3d objects.
However as soon as I switch to stereo mode, it stops working; i.e., myFunc never gets triggered.
I tried updating the value of origin.x to -0.5. I did that because in VR mode, the screen gets split into two halves. However that didn't work either.
What should I do to make the raycaster intersect the 3D objects in VR mode (when stereo effect is turned on)?
Could you please provide a jsfiddle with the code?
Basically, if you are using stereo in your app, it means you are using two cameras, so you need to check your intersects against both cameras' views, which can become an expensive process.
var cameras = {
    'camera1': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000),
    'camera2': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000)
};

for (var cam in cameras) {
    raycaster.setFromCamera(origin, cameras[cam]);
    // continue your logic
}
You could use a vector object that simulates the camera intersection to avoid checking twice, but this depends on what you are trying to achieve.
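If your gaze cursor is always at the centre of the screen, one option along those lines (a sketch; it assumes the mono camera you pass to StereoEffect.render() still tracks the head pose, and it reuses camera, objectContainer and myFunc from the question) is to raycast once from that camera instead of from each eye:

var raycaster = new THREE.Raycaster();
var centre = new THREE.Vector2(0, 0); // normalized device coordinates of the screen centre

function checkGaze() {
    raycaster.setFromCamera(centre, camera); // the scene camera, not the per-eye cameras
    var intersects = raycaster.intersectObjects(objectContainer.children, true);
    if (intersects.length > 0 && intersects[0].object.visible) {
        myFunc();
    }
}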
I encountered a similar problem and eventually found the reason. In StereoEffect, THREE.js displays the meshes for the two eyes, but in reality it only adds one mesh to the scene, exactly in the middle of the line between the left-eye mesh and the right-eye mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not on the illusion displayed for each eye!
I detailed how to do it here:
Three.js StereoEffect displays meshes across 2 eyes
Hope it solves your problem!
You can use my StereoEffect.js file in your project to resolve the problem. See the example of its usage. See my Raycaster stereo pull request as well.
I'm new in THREE.js.
I'm trying to get the 3D coordinates of the point where a mouse click hits an object (not simple objects like Box, Sphere, ...) in the canvas.
In detail, I'm working on a 3D object viewer: I have a camera (THREE.PerspectiveCamera), mouse controls (rotate, zoom, move), and I add/remove objects (my own objects, loaded using THREE.js loaders) in the scene. I want to add a function that gets the 3D coordinates of the clicked point.
Exactly, I want the coordinates of the end point of a ray that begins at the mouse click on the camera's near plane and ends at the point of the object I clicked on.
I tried a lot of ways to do it:
Getting the coordinates of the point on the z=0 plane: it works fine, but it is constrained to the z=0 plane, which is not what I need, because I have OrbitControls.
The THREE.js example with clickable objects: it uses CanvasRenderer (not WebGLRenderer) and works for a few objects, but not for my project: the browser crashes when I load many objects (CanvasRenderer needs 5x more memory than WebGLRenderer).
"How to get object in WebGL 3d space from a mouse click coordinate": I tried this one too, but raycaster.intersectObjects found nothing; intersects was an empty array (maybe it works only for simple objects like box, sphere, ...).
Can anyone show me demo code that gets the 3D coordinates of the clicked point on an object in 3D, please?
So, as I think this question may be useful to someone, I'll answer it myself (here is my solution):
var renderer, canvas, canvasPosition, camera, scene, rayCaster, mousePosition;

function init() {
    renderer = new THREE.WebGLRenderer({ antialias: false });
    canvas = renderer.domElement;
    canvasPosition = $(canvas).position();

    camera = new THREE.PerspectiveCamera(20, $(canvas).width() / $(canvas).height(), 0.01, 1e10);
    scene = new THREE.Scene();
    rayCaster = new THREE.Raycaster();
    mousePosition = new THREE.Vector2();

    scene.add(camera);

    var myObjects = new THREE.Object3D();
    // myObjects.add( your object );
    // myObjects.add( your object );
    // myObjects.add( your object );
    myObjects.name = 'MyObj_s';
    scene.add(myObjects);
}

function getClicked3DPoint(evt) {
    evt.preventDefault();

    mousePosition.x = ((evt.clientX - canvasPosition.left) / canvas.width) * 2 - 1;
    mousePosition.y = -((evt.clientY - canvasPosition.top) / canvas.height) * 2 + 1;

    rayCaster.setFromCamera(mousePosition, camera);
    var intersects = rayCaster.intersectObjects(scene.getObjectByName('MyObj_s').children, true);

    if (intersects.length > 0)
        return intersects[0].point;
}
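A typical way to wire it up (assuming the renderer's canvas has already been added to the DOM):

canvas.addEventListener('mousedown', function (evt) {
    var point = getClicked3DPoint(evt);
    if (point) console.log('Clicked 3D point:', point.x, point.y, point.z);
});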
I want to show a mesh (like a gunshot) in front of my perspective camera (with first person controls). I wrote this code in the render function of my page:
var pos = camera.position;
var rot = camera.rotation;
shot.rotation.x = rot.x;
shot.rotation.y = rot.y;
shot.rotation.z = rot.z;
shot.position.x = pos.x;
shot.position.y= pos.y;
shot.position.z = pos.z + 500;
If I just change the position of my camera it's fine, but if I change the camera's rotation I don't see the shot in front of it.
How can I do this?
It would seem that you need to make the "shot" a child of the camera. It's not clear from your example whether you're doing that already, but this should make the shot move around with the camera properly.
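A minimal sketch of that (assuming shot is the mesh from the question and camera is your perspective camera):

// Parent the shot to the camera so it inherits the camera's position and rotation.
scene.add(camera); // the camera needs to be in the scene graph for its children to render
camera.add(shot);
shot.position.set(0, 0, -500); // 500 units in front of the camera (negative z is "forward" in camera space)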