I'm a bit stuck implementing rotation of an avatar (Ready Player Me) so that it follows the user's perspective camera controlled by PointerLockControls.
Avatar has the following hierarchy:
-bone Neck (parent)
-bone Head (child)
The goal is to take the current camera rotation and split it between the Head and Neck.
I want the head to rotate no more than PI/2 (or even PI/4) in each direction (XYZ). The rest of the rotation should be handled by the Neck. It would be fine even if the Neck stayed static on X and Z and rotated only around Y.
If I do a simple subtraction following this formula (I skip the sign handling):
neckRotationY = cameraRotationY < PI/2 ? 0 : cameraRotationY - (cameraRotationY % (PI/2));
headRotationY = cameraRotationY % (PI/2);
everything works until the camera starts looking backwards. Instead of reporting a Y rotation approaching 2PI to look behind, the camera's Euler angles flip around X and Z, so the simple formula doesn't work.
Could somebody give me an idea of how I can separate the camera rotation into neck and head rotations?
Thanks!
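One way to sidestep the Euler flip is to decompose the camera's quaternion with a 'YXZ' Euler order, which keeps yaw continuous across a full turn. A minimal sketch, assuming standard three.js and that the Neck and Head bones' local axes line up with the world axes (a real Ready Player Me rig may need the bones' rest rotations composed in):
const euler = new THREE.Euler(0, 0, 0, 'YXZ');
euler.setFromQuaternion(camera.quaternion, 'YXZ'); // yaw in euler.y, pitch in euler.x

const maxHead = Math.PI / 4; // head turns at most PI/4 in each direction
const headYaw = THREE.MathUtils.clamp(euler.y, -maxHead, maxHead);

neck.rotation.y = euler.y - headYaw; // the neck takes the remainder, Y only
head.rotation.y = headYaw;
head.rotation.x = THREE.MathUtils.clamp(euler.x, -maxHead, maxHead); // pitch stays on the head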
I'm trying to do something similar to this example, except instead of having the snow flakes flutter about in all directions I'm trying to animate these sprites in only one direction, like having the snow flakes fall to the ground.
The example above was able to load multiple sprites into one geometry since it can vary the rotations of the points object:
particles.rotation.x = Math.random() * 6;
particles.rotation.y = Math.random() * 6;
particles.rotation.z = Math.random() * 6;
However, this won't work if you're animating all the points in one direction. In this case, would I have to create a new geometry for each sprite, or is there a more efficient way to do this using just one geometry?
There are several options. Instead of rotating randomly, you could:
Decrease the y position on each frame with particles.position.y -= 0.01;. When it crosses a certain threshold (for example, y <= -100), move it back up to the origin (y = 100). You'll have to stagger a few Sprite objects so you don't notice the jump (see the sketch after this list).
Rotate along the x-axis, so the spinning motion makes them go down when in front of the camera.
Since the snowflakes will spin up on the opposite side, you could use some fog to hide the far side, and give it a more wintry feel.
Animating via custom shaders, although this is much more complex if you don't know GLSL shader code.
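A minimal sketch of the first option, assuming the staggered objects (each a Points object like particles above) are kept in an array; the names here are illustrative:
function animateSnow(particleGroups) {
  for (const particles of particleGroups) {
    particles.position.y -= 0.01;       // drift down each frame
    if (particles.position.y <= -100) { // past the threshold...
      particles.position.y = 100;       // ...wrap back up to the top
    }
  }
}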
I'm using ThreeJS, but this is a general math question.
My end goal is to position an object in my scene using 2D screen space coordinates; however, I want a specific z position in the perspective projection.
As an example, I have a sphere that I want to place towards the bottom left of the screen while having the sphere be 5 units away from the camera. If the camera were to move, the sphere would maintain its perceived size and position.
I can't use an orthographic camera because the sphere needs to be able to move around in the perspective projection. At some point the sphere will be undocked from the screen and interact with the scene using physics.
I'm sure the solution lies somewhere in the camera's inverse matrix; however, that is beyond my abilities at the moment.
Any help is greatly appreciated.
Thanks!
Your post includes too many questions, which is out of scope for StackOverflow. But I’ll try to answer just the main one:
Create a plane Mesh using PlaneGeometry.
Rotate it to face the camera, place it 5 units away from the camera.
Add it as a child with camera.add(plane); so whenever the camera moves, the plane moves with it.
Use the Raycaster's .setFromCamera(coords, camera) method, then .intersectObject(plane), to convert x, y screen coords into the x, y, z world position where the ray intersects the plane. You can read about it in the docs.
Once it’s working, make the plane invisible with visible = false
You can see the raycaster working in this official example: https://threejs.org/examples/#webgl_geometry_terrain_raycast
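Putting those steps together, a rough sketch (assuming a standard three.js setup where camera and the sphere already exist; placeSphere is an illustrative helper, not part of the API):
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(100, 100),
  new THREE.MeshBasicMaterial()
);
plane.position.z = -5; // 5 units in front of the camera
camera.add(plane);     // the plane now follows the camera
plane.visible = false; // invisible objects are still raycast against

const raycaster = new THREE.Raycaster();
const coords = new THREE.Vector2(-0.5, -0.5); // NDC coords, towards the bottom left

function placeSphere(sphere) {
  raycaster.setFromCamera(coords, camera);
  const hits = raycaster.intersectObject(plane);
  if (hits.length > 0) {
    sphere.position.copy(hits[0].point); // world-space point on the plane
  }
}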
I'm working on a simple Three.js demo that uses OrbitControls.js.
I'd like to change the behavior of panning in OrbitControls. Currently, when you pan the camera, it moves the camera in a plane that is perpendicular to the viewing direction. I'd like to change it so that the camera stays a constant distance from the ground plane and moves parallel to it. Google Earth uses a similar control setup.
Edit: I should have mentioned this detail in the first place, but I'd also like the point where you click and start dragging to remain directly under the cursor throughout the entire drag. There needs to be that solid connection between the mouse movement and what the user expects to happen on the screen. Otherwise, it feels as though I'm 'slipping' when I try to move around the scene.
Can someone give me a high-level explanation of how this might be done (with or without OrbitControls.js)?
EDIT: OrbitControls now supports panning parallel to the "ground plane", and it is the default.
To pan in screen space (the legacy behavior), set:
controls.screenSpacePanning = true;
Also available is MapControls, which has an API similar to that of Google Earth.
three.js r.94
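For completeness, a hedged sketch of the MapControls route (the import path is from current module builds of three.js and may differ by release):
import { MapControls } from 'three/addons/controls/MapControls.js';

const controls = new MapControls(camera, renderer.domElement);
controls.screenSpacePanning = false; // keep panning parallel to the ground plane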
Some time ago I was working on exactly this issue, i.e. adaptation of OrbitControls.js to map navigation.
Here's the code of MapControls.js.
Here's the demo of the controls.
I figured it out. Here's the overview:
Store the mousedown event somewhere.
When the mouse moves, record the new mouse position from the mousemove event.
For each of those points, find the points on the plane where those clicks are located (You'll need to put the points into camera space, transform them into world space, then fire a ray from the camera through each point to find their intersections with the plane. This page explains the ray-plane intersection test).
Subtract the world-space start intersection point from the world-space end intersection point to get the offset.
Subtract that offset from the camera's target point and you're done!
In the case of OrbitControls.js, the camera always looks at the target point, and its position is relative to that point. So when you change the target, the camera moves with it. Since the target always lies on the plane, the camera moves parallel to that plane (as long as you're panning).
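A hedged sketch of steps 3-5 using three.js's built-in ray-plane intersection (groundPlane and the pan helper are illustrative; startNdc and endNdc are the two mouse positions in normalized device coordinates):
const groundPlane = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0); // the y = 0 plane
const raycaster = new THREE.Raycaster();

function pointOnGround(ndc, camera) {
  raycaster.setFromCamera(ndc, camera);
  const hit = new THREE.Vector3();
  return raycaster.ray.intersectPlane(groundPlane, hit) ? hit : null;
}

function pan(startNdc, endNdc, camera, controls) {
  const start = pointOnGround(startNdc, camera);
  const end = pointOnGround(endNdc, camera);
  if (!start || !end) return;
  const offset = end.sub(start); // world-space drag offset
  controls.target.sub(offset);   // shift the target against the drag...
  camera.position.sub(offset);   // ...and the camera with it, parallel to the plane
}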
You should set your camera 'up' to the z axis:
camera.up.set(0,0,1)
Then the main problem with OrbitControls is its panUp() function, which needs to be fixed.
My pull request : https://github.com/mrdoob/three.js/pull/12727
The y axis is relative to the camera's axes, but it should be relative to a fixed plane in the world. To get the expected y axis, rotate the camera's x axis by 90° around the world z axis.
v.setFromMatrixColumn( objectMatrix, 0 ); // get X column of objectMatrix
v.applyAxisAngle( new THREE.Vector3( 0, 0, 1 ), Math.PI / 2 ); // rotate it 90° around the world z axis
v.multiplyScalar( distance );
panOffset.add( v );
Enjoy!
Currently, I'm taking each corner of my object's bounding box, converting it to Normalized Device Coordinates (NDC), and keeping track of the maximum and minimum NDC values. I then calculate the middle of the NDC range, find that point in the world, and have my camera look at it.
<Determine max and minimum NDCs>
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera); // NDC -> world space (legacy Projector API)
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance)); // step `distance` along that ray
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal (ignoring sign) when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers, and as I step closer and closer the 'error' between them grows bigger and bigger, i.e. the error for the first few centers is: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point, though, I look too far right, and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is -1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field, and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move that distance and direction, i.e. 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane that passes through that point and faces the camera (i.e. perpendicular to the direction the camera is facing).
Next, raycast from your camera to that plane in the direction the camera is facing. The difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This should work irrespective of the direction or position of the camera and object.
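A minimal sketch of that idea in three.js terms (assuming object and camera exist; this illustrates the approach rather than reproducing the poster's code):
const center = new THREE.Vector3();
new THREE.Box3().setFromObject(object).getCenter(center); // object centre in world space

const viewDir = new THREE.Vector3();
camera.getWorldDirection(viewDir);

// Plane through the object's centre, perpendicular to the viewing direction.
const plane = new THREE.Plane().setFromNormalAndCoplanarPoint(viewDir, center);

// Where the current view ray hits that plane.
const ray = new THREE.Ray(camera.position.clone(), viewDir);
const hit = new THREE.Vector3();
ray.intersectPlane(plane, hit);

// Shift the camera by the difference so the view ray passes through the centre.
camera.position.add(center.clone().sub(hit));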
You are running into a chicken-and-egg problem. Every time you change the camera attributes you effectively change where your object is projected in NDC space, so even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere, make it as canonical as possible (i.e. give it an aspect ratio of 1), and place your object along the camera's z-axis. Is this not possible?
I've been searching all evening but can't find the information I'm looking for, or even if it's possible, which is quite distressing ;)
I'm using Java3D and can't figure out how to rotate the camera in world space.
My left/right, and up/down rotation both happen on local space.
Meaning that if I move left and right, everything looks fine.
However if I look 90 degrees down, then look 90 degrees right, everything appears to be on its side.
Currently, I'm doing the following. This will result in the above effects:
TransformGroup cam = universe.getViewingPlatform().getViewPlatformTransform();
Transform3D trfcam = new Transform3D();
cam.getTransform(trfcam);
trfcam.mul(Camera.GetT3D()); //Gets a Transform3D containing how far to rotate left/right and how far to move left/right/forward/back
trfcam.mul(Camera.GetRot()); //Gets a t3d containing how far to rotate up/down
cam.setTransform(trfcam);
Alternatively, one thing I tried was rotating the root, but that rotates around 0, so if I ever move the camera away from 0, it goes bad.
Is there something available on the web that would talk me through how to achieve this kind of thing?
I've tried a lot of different things but just can't seem to get my head around it at all.
I'm familiar with the concept, as I've achieved it in Ogre3D, just not familiar with the law of the land in J3D.
Thanks in advance for replies :)
Store the amount you have rotated around each axis (x and y), and when you want to rotate around the x axis, for example, reverse the rotation around y, do the rotation around x, then redo the rotation around y.
I'm not sure I understand your second question correctly. Since viewer and model transformations are dual, you can simulate camera moves by transforming the world itself. If you don't want to translate the x and y axes you are rotating around, just add another TransformGroup to the main TransformGroup you are using, and do the transforms in the new one.
Edit: The first solution is quite slow, so you can build a single Transform3D out of the three transforms you have to do:
Say you have rotated around the x axis (Transform3D xrot), and now you need to rotate around y:
Transform3D yrot = new Transform3D();
yrot.rotY(angle);
Transform3D temp = new Transform3D(xrot); // copy the previous transform
xrot.mul(yrot);   // don't forget the reverse order: xrot = xrot * yrot
temp.transpose(); // for a pure rotation, the transpose is the inverse
xrot.mul(temp);   // xrot = xrot * yrot * xrot^-1
yrot = xrot;      // store it for a future rotation around the x axis
cam.setTransform(yrot);
It works similarly for many transformations: reverse the previously applied transform, do the new one, then redo the old one. I hope it helps.