Rotation with negative scale

I'm creating a tool to rotate images in three.js, but it doesn't work when dealing with negative scales.
The image is displayed in a Mesh created from a THREE.PlaneGeometry and a material that maps the corresponding image.
The tool is an object that has an element called gizmo (it's a small mesh) which is selected and dragged by the user to rotate the object.
To do the rotation I define an angle and an axis. The angle is defined by two vectors created using the position of the gizmo (original and current) and the position of the Mesh.
// gizmo and image positions in world space
var gizmoOriginalPosition = this.gizmoOriginalPosition.clone().applyMatrix4( this.matrixWorld );
var imagePosition = this.imageToTransformOriginalPosition.clone().applyMatrix4( this.imageToTransformParentOriginalMatrix );
// world-space directions from the image centre to the gizmo's original and current positions
var vector1 = gizmoOriginalPosition.sub( imagePosition ).normalize();
var vector2 = point.sub( imagePosition ).normalize();
// unsigned angle between the two directions
var angle = Math.acos( vector1.dot( vector2 ) );
// the Mesh is a plane in XY, so the local rotation axis is +Z
var axis = new THREE.Vector3( 0, 0, 1 );
// ortho points along +Z or -Z depending on the drag direction
var ortho = vector2.clone().cross( vector1 );
// rotate the local axis into world space using the image's original rotation
var _m = this.imageToTransformOriginalMatrix.clone();
this.tempMatrix.extractRotation( _m );
var q = new THREE.Quaternion().setFromRotationMatrix( this.tempMatrix );
var _axis = axis.clone().applyQuaternion( q );
// pick the sign of the angle by comparing ortho with the world-space axis
var f = ortho.dot( _axis );
f = f > 0 ? 1 : -1;
angle *= -f;
// apply the delta rotation on top of the original orientation
var qDelta = new THREE.Quaternion().setFromAxisAngle( axis, angle );
var Q = new THREE.Quaternion().multiplyQuaternions( this.imageToTransformOriginalQuaternion, qDelta );
imageToTransform.quaternion.copy( Q );
The axis of rotation is always (0, 0, 1) because the Mesh is a plane in XY.
point is the new position of the gizmo, obtained from an intersection plane.
The vectors used to define the angle are in world coordinates. ortho is a vector that defines the direction (sign) of the angle, so the Mesh rotates in the direction of the mouse pointer. I determine the sign with the f value obtained from ortho and axis. The axis (0, 0, 1) is rotated so that its direction is in world coordinates (ortho is in world coordinates).
This works as expected in almost every case, except when the Mesh has a negative scale in X and Y. Here the image rotates in the opposite direction to the mouse pointer.
Thanks.
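The sign/acos logic above can be expressed more compactly as a single signed angle; this is a sketch equivalent to the f and angle computation, reusing vector1, vector2, and the extracted rotation q from the snippet (worldAxis plays the role of _axis):
// signed angle from vector1 to vector2 around the world-space plane normal
var worldAxis = new THREE.Vector3( 0, 0, 1 ).applyQuaternion( q ); // q = extracted original rotation, as above
var signedAngle = Math.atan2( vector1.clone().cross( vector2 ).dot( worldAxis ), vector1.dot( vector2 ) );
As for the negative-scale case, one thing worth checking (an assumption, not verified here) is whether the transform is mirrored: a matrix with a negative determinant (Matrix4.determinant()) flips handedness and therefore the perceived rotation direction, although a scale that is negative in both X and Y still has a positive determinant.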

Related

Rotation snapping, find next closest quaternion from an array in THREE.js

I am rotating a cube around a particular axis (x, y or z) by dragging my mouse. Let's say that while dragging I calculate how much rotation angle I have to apply.
Now I have my current cube rotation as a quaternion, and also an array of quaternions for 0, 45, 90, 135, 180, 225, 270, 315, and 360 degrees.
While rotating the cube I want to find the closest quaternion from the array. Let's say I am rotating anti-clockwise and my cube is at 30; then the closest will be the 90-degree quaternion from the array. Similarly, for 170 I should get the 180-degree quaternion from the array.
Currently I am maintaining a variable, and depending on the direction (clockwise or anti-clockwise) in which I am rotating the cube, I update that variable and find the next required quaternion from the array. But I need a more efficient way, if one exists.
My code currently does something like this. If anyone has a solution, or a different way of doing this, please help me:
function handleDrag() {
  let target = new THREE.Vector3();
  raycaster.setFromCamera(mouseNDC, camera);
  raycaster.ray.intersectPlane(Zplane, target);
  let temp = target;
  target.sub(mesh.position); // final vector after drag
  initial.sub(mesh.position); // initial vector
  // get rotation direction using cross product
  let xx = new THREE.Vector3().copy(target).normalize();
  let yy = new THREE.Vector3().copy(prevDir).normalize();
  let dir = yy.cross(xx).z;
  const angleBtwVec = angleInPlane(initial, target, Zplane.normal);
  let quaternion = new THREE.Quaternion();
  quaternion.setFromAxisAngle(new THREE.Vector3(0, 1, 0), angleBtwVec);
  let effectiveQuatChange = new THREE.Quaternion().copy(initialCubeQuat);
  effectiveQuatChange.multiply(quaternion);
  // find the next quaternion against which we need to compare for snapping
  let check = next;
  if (dir > 0) { // anti-clockwise
    check = (next + 1) % 8;
  } else { // clockwise
    check = (next - 1 + 8) % 8;
  }
  // apply quaternion change
  mesh.quaternion.copy(effectiveQuatChange);
  let reqQuatArray = quaternionArrayYR;
  let angleDiff = toDegrees(mesh.quaternion.angleTo(reqQuatArray[check]));
  console.log(angleDiff, check);
  if (angleDiff <= 15) { // if the mesh is within 15 degrees of the next required quaternion, snap to it
    next = check;
    mesh.quaternion.copy(reqQuatArray[next]);
    initialCubeQuat.copy(reqQuatArray[next]);
    initial = temp;
  }
  prevDir = temp;
}
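One simpler alternative (a sketch, not taken from the question): instead of tracking a direction and an index, scan the whole array for the quaternion with the smallest angular distance to the cube's current orientation; Quaternion.angleTo() gives that distance directly. quaternionArrayYR and the 15-degree threshold are reused from the snippet above.
// returns the index of the array quaternion closest to q, or -1 if none is within maxDeg
function closestQuaternionIndex(q, quatArray, maxDeg) {
  let bestIndex = -1;
  let bestAngle = maxDeg * Math.PI / 180; // threshold in radians
  for (let i = 0; i < quatArray.length; i++) {
    const angle = q.angleTo(quatArray[i]); // angular distance in radians
    if (angle <= bestAngle) {
      bestAngle = angle;
      bestIndex = i;
    }
  }
  return bestIndex;
}
// usage inside handleDrag(), after effectiveQuatChange has been computed
const snapIndex = closestQuaternionIndex(effectiveQuatChange, quaternionArrayYR, 15);
if (snapIndex !== -1) {
  mesh.quaternion.copy(quaternionArrayYR[snapIndex]);
}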

Getting Z value on mesh for respective XY

I am trying to get the Z value on the mesh when I pass in the X & Y coordinates. Sorry, I am new to three.js.
I am using a raycaster for this. My plan is to set the origin exactly above the point and the direction straight down below it, so that the ray intersects the mesh and returns the respective values.
Here is my code:
for (var i = 0; i < points.length; i++) {
  var pts = points[i];
  var top = new THREE.Vector3(pts.x, pts.y, 50);
  var bottom = new THREE.Vector3(pts.x, pts.y, -50);
  // start raycaster
  var raycaster = new THREE.Raycaster();
  raycaster.set(top, bottom);
  // calculate objects intersecting the picking ray
  var intersects = raycaster.intersectObjects(scene.getObjectByName('MyObj_s').children, false);
  if (intersects.length > 0) {
    console.log(intersects[0].point);
  }
}
However, the above code returns totally different X & Y positions, and definitely inaccurate Z values.
top
Object { x: 58.26593421875712, y: 63.505675324244834, z: 50 }
bottom
Object { x: 58.26593421875712, y: 63.505675324244834, z: -50 }
Result
Object { x: -2.9414508017947445, y: -13.236528362050667, z: -2.0969017881066634 }
raycaster.set( top, bottom );
It seems you are not using Raycaster.set() correctly. As you can see in the documentation, the method expects an origin and a direction vector. In your code, you just pass in two points.
The first parameter origin represents the origin vector where the ray casts from.
The second parameter direction is a normalized (!) vector representing the direction of the ray.
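A minimal sketch of the corrected call, assuming the ray should point straight down the negative z-axis from above each point (the loop and variable names are taken from the question):
var raycaster = new THREE.Raycaster();
var down = new THREE.Vector3(0, 0, -1); // the direction must be a normalized vector, not a point
for (var i = 0; i < points.length; i++) {
  var pts = points[i];
  var origin = new THREE.Vector3(pts.x, pts.y, 50); // start well above the mesh
  raycaster.set(origin, down);
  var intersects = raycaster.intersectObjects(scene.getObjectByName('MyObj_s').children, false);
  if (intersects.length > 0) {
    console.log(intersects[0].point.z); // the Z value on the mesh at (pts.x, pts.y)
  }
}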
three.js R104

Why do Camera and Object3D look opposite directions?

I have a basic question, but I could not find the answer.
I noticed that the following code:
const p = new THREE.Vector3();
const q = new THREE.Quaternion();
const s = new THREE.Vector3();
function setPositionAndRotation(o) {
  o.position.set(1, 1, -1);
  o.lookAt(0, 0, 0);
  o.updateMatrix();
  o.matrix.decompose(p, q, s);
  console.log(JSON.stringify(q));
}
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, .01, 1000);
var mesh = new THREE.Mesh(new THREE.Geometry(), new THREE.MeshBasicMaterial());
setPositionAndRotation(camera);
setPositionAndRotation(mesh);
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/97/three.min.js"></script>
produces different quaternions for Camera and Object3D:
{"_x":-0.11591689595929515,"_y":0.8804762392171493,"_z":0.27984814233312133,"_w":0.36470519963100095}
{"_x":0.27984814233312133,"_y":-0.3647051996310009,"_z":0.11591689595929516,"_w":
(These two quaternions point in opposite directions along the Z axis.)
The problem lies in the behavior of the lookAt function. I dug into the source code of Object3D and found this if statement:
https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L331
if ( this.isCamera ) {
  m1.lookAt( position, target, this.up );
} else {
  m1.lookAt( target, position, this.up );
}
As you can see, Object3D is handled differently from Camera: target and position are swapped.
Object3D's documentation says:
lookAt ( x : Float, y : Float, z : Float ) : null
Rotates the object to face a point in world space.
but the code does the opposite. It uses Matrix4's lookAt function
lookAt ( eye : Vector3, center : Vector3, up : Vector3 ) : this
Constructs a rotation matrix, looking from eye towards center oriented by the up vector.
putting target into eye, and position into center.
I can deal with that, but it is weird. Is there anybody able to explain why it is so?
r.97
In three.js, an unrotated object is considered to face its local positive-z axis.
The exception is a camera, which faces its local negative-z axis.
This design decision followed OpenGL convention.
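A small sketch that makes the convention visible, using the same position and target as the snippet above: applying each object's quaternion to its own local "forward" axis yields the same world-space direction, from (1, 1, -1) towards the origin.
const camera = new THREE.PerspectiveCamera();
const mesh = new THREE.Object3D();
camera.position.set(1, 1, -1);
mesh.position.set(1, 1, -1);
camera.lookAt(0, 0, 0);
mesh.lookAt(0, 0, 0);
// a camera faces its local -Z axis, a plain Object3D faces its local +Z axis
const cameraForward = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);
const meshForward = new THREE.Vector3(0, 0, 1).applyQuaternion(mesh.quaternion);
// both log roughly (-0.577, -0.577, 0.577), i.e. pointing towards the origin
console.log(cameraForward, meshForward);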

Drawing lines between the Icosahedron vertices without wireframe material and with some line width using WEBGLRenderer

I'm new to three.js.
I need to draw a sphere made of connected triangles. I use an icosahedron to construct the sphere in the following way:
var material = new THREE.MeshPhongMaterial({
  emissive: 0xffffff,
  transparent: true,
  opacity: 0.5,
  wireframe: true
});
var icogeo = new THREE.IcosahedronGeometry(80, 2);
var mesh = new THREE.Mesh(icogeo, material);
scene.add(mesh);
But I need the lines to be thicker, and line width has no effect on Windows, so I thought of looping through the vertices and drawing a cylinder/tube between them. (I can't draw lines because LineBasicMaterial does not respond to lights.)
for (i = 0; i < icogeo.faces.length; i++) {
  var face = icogeo.faces[i];
  // get vertices from face and draw cylinder/tube between the three vertices
}
Can someone please help with drawing a tube/cylinder between two Vector3 vertices?
(The problem I'm facing with wireframe is that it is not smooth and I can't increase its width on Windows.)
If you really want to create a cylinder between two points, one way to do it is to create it in a unit space and then transform it onto your line. But that is very mathy.
An intuitive way to create it is to think about how you would do it in a unit space: a circle around the z axis (in x, y) and another one a bit further down z.
Creating a circle in 2D is easy: for ( angle(0, 360, 360/numsteps) ) (x, y) = (sin(angle), cos(angle)) * radius (see for example Calculating the position of points in a circle).
Now the two butt ends of your cylinder are not in x, y! But if you have two vectors dx, dy you can just multiply your x, y with them and get a 3D position.
So how do you get dx, dy? One way is the Gram-Schmidt process: http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process
which reads far scarier than it is. You start with your forward direction, which is your line: forward = normalize(end - start). Then you just pick a direction "up", usually (0, 1, 0); unless forward is already close to up, then pick another one like (1, 0, 0). Take their cross product: this gives you "left". Then take the cross product of "left" and "forward" to get "right". Now "left" and "right" are your dx and dy!
That way you can make two circles at the two ends of your line. Add triangles in between and you have a cylinder!
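A minimal sketch of that construction (ringAround and numSteps are illustrative names, not from the answer):
// build an orthonormal basis around the line direction, then place a ring of points around a centre
function ringAround(center, forward, radius, numSteps) {
  var up = Math.abs(forward.y) < 0.99 ? new THREE.Vector3(0, 1, 0) : new THREE.Vector3(1, 0, 0);
  var side1 = new THREE.Vector3().crossVectors(up, forward).normalize();    // "left"
  var side2 = new THREE.Vector3().crossVectors(forward, side1).normalize(); // "right"
  var points = [];
  for (var i = 0; i < numSteps; i++) {
    var angle = (i / numSteps) * Math.PI * 2;
    points.push(center.clone()
      .addScaledVector(side1, Math.cos(angle) * radius)
      .addScaledVector(side2, Math.sin(angle) * radius));
  }
  return points;
}
var start = new THREE.Vector3(0, 0, 0);
var end = new THREE.Vector3(10, 10, 0);
var forward = new THREE.Vector3().subVectors(end, start).normalize();
var ringA = ringAround(start, forward, 2, 16);
var ringB = ringAround(end, forward, 2, 16);
// connect ringA[i], ringB[i], ringA[i + 1] and ringB[i], ringB[i + 1], ringA[i + 1] as triangles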
Even though I do believe it is overkill for what you are trying to achieve, here is code that draws a capsule (a cylinder with spheres at the ends) between two endpoints.
/**
 * Returns a THREE.Object3D cylinder and spheres going from top to bottom positions
 * @param radius - the radius of the capsule's cylinder
 * @param top, bottom - THREE.Vector3, top and bottom positions of the capsule
 * @param radiusSegments - tessellation around equator
 * @param openTop, openBottom - whether the end is given a sphere; true means they are not
 * @param material - THREE.Material
 */
function createCapsule (radius, top, bottom, radiusSegments, openTop, openBottom, material)
{
    radiusSegments = (radiusSegments === undefined) ? 32 : radiusSegments;
    openTop = (openTop === undefined) ? false : openTop;
    openBottom = (openBottom === undefined) ? false : openBottom;
    var capsule = new THREE.Object3D();
    var cylinderAxis = new THREE.Vector3();
    cylinderAxis.subVectors (top, bottom); // get cylinder height
    var cylinderGeom = new THREE.CylinderGeometry (radius, radius, cylinderAxis.length(), radiusSegments, 1, true); // open-ended
    var cylinderMesh = new THREE.Mesh (cylinderGeom, material);
    // get cylinder center for translation
    var center = new THREE.Vector3();
    center.addVectors (top, bottom);
    center.divideScalar (2.0);
    // pass in the cylinder itself, its desired axis, and the place to move the center.
    makeLengthAngleAxisTransform (cylinderMesh, cylinderAxis, center);
    capsule.add (cylinderMesh);
    if (! openTop || ! openBottom)
    {
        // instance geometry
        var hemisphGeom = new THREE.SphereGeometry (radius, radiusSegments, radiusSegments/2, 0, 2*Math.PI, 0, Math.PI/2);
        // make a cap instance of hemisphGeom around 'center', looking into some 'direction'
        var makeHemiCapMesh = function (direction, center)
        {
            var cap = new THREE.Mesh (hemisphGeom, material);
            makeLengthAngleAxisTransform (cap, direction, center);
            return cap;
        };
        // ================================================================================
        if (! openTop)
            capsule.add (makeHemiCapMesh (cylinderAxis, top));
        // reverse the axis so that the hemiCaps would look the other way
        cylinderAxis.negate();
        if (! openBottom)
            capsule.add (makeHemiCapMesh (cylinderAxis, bottom));
    }
    return capsule;
}
// Transform object to align with given axis and then move to center
function makeLengthAngleAxisTransform (obj, align_axis, center)
{
    obj.matrixAutoUpdate = false;
    // From left to right using frames: translate, then rotate; TR.
    // So translate is first.
    obj.matrix.makeTranslation (center.x, center.y, center.z);
    // take cross product of axis and up vector to get axis of rotation
    var yAxis = new THREE.Vector3 (0, 1, 0);
    // Needed later for dot product, just do it now;
    var axis = new THREE.Vector3();
    axis.copy (align_axis);
    axis.normalize();
    var rotationAxis = new THREE.Vector3();
    rotationAxis.crossVectors (axis, yAxis);
    if (rotationAxis.length() < 0.000001)
    {
        // Special case: if rotationAxis is just about zero, set to X axis,
        // so that the angle can be given as 0 or PI. This works ONLY
        // because we know one of the two axes is +Y.
        rotationAxis.set (1, 0, 0);
    }
    rotationAxis.normalize();
    // take dot product of axis and up vector to get cosine of angle of rotation
    var theta = -Math.acos (axis.dot (yAxis));
    // obj.matrix.makeRotationAxis (rotationAxis, theta);
    var rotMatrix = new THREE.Matrix4();
    rotMatrix.makeRotationAxis (rotationAxis, theta);
    obj.matrix.multiply (rotMatrix);
}
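For example, the capsule could be used like this (the endpoints, radius, material, and the scene variable are made up for illustration):
var material = new THREE.MeshPhongMaterial({ color: 0x2194ce });
var top = new THREE.Vector3(10, 20, 0);
var bottom = new THREE.Vector3(-10, -5, 30);
// radius 2, 16 segments around the equator, closed at both ends
var capsule = createCapsule(2, top, bottom, 16, false, false, material);
scene.add(capsule);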

convert Point3D To Screen2D get wrong result in three.js

I use a function like this in three.js r69:
function Point3DToScreen2D(point3D, camera){
  var p = point3D.clone();
  var vector = p.project(camera);
  vector.x = (vector.x + 1) / 2 * window.innerWidth;
  vector.y = -(vector.y - 1) / 2 * window.innerHeight;
  return vector;
}
It works fine when I keep the scene still.
But when I rotate the scene it returns a wrong position on the screen. It happens when I rotate by about 180 degrees: the point shouldn't have a position on the screen any more, but it is still shown.
I set a position var tmpV = Point3DToScreen2D(new THREE.Vector3(-67, 1033, -2500), camera); in my update loop and show it with CSS3D. When I rotate by about 180 degrees (but less than 360) the point shows up on the screen again. It is obviously a wrong position, as can be told from the scene, and I haven't rotated a full 360 degrees.
I know little about matrices, so I don't know how project works.
Here is the source of project in three.js:
project: function () {
  var matrix;
  return function ( camera ) {
    if ( matrix === undefined ) matrix = new THREE.Matrix4();
    matrix.multiplyMatrices( camera.projectionMatrix, matrix.getInverse( camera.matrixWorld ) );
    return this.applyProjection( matrix );
  };
}()
Is the matrix.getInverse( camera.matrixWorld ) redundant? I tried to delete it and it didn't work.
Can anyone help me? Thanks.
You are projecting a 3D point from world space to screen space using a pattern like this one:
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
vector.z = 0;
However, using this approach, points behind the camera are projected to screen space, too.
You said you want to filter out points that are behind the camera. To do that, you can use this pattern first:
var matrix = new THREE.Matrix4(); // create once and reuse
...
// get the matrix that maps from world space to camera space
matrix.getInverse( camera.matrixWorld );
// transform your point from world space to camera space
p.applyMatrix4( matrix );
Since the camera is located at the origin in camera space, and since the camera is always looking down the negative-z axis in camera space, points behind the camera will have a z-coordinate greater than zero.
// check if point is behind the camera
if ( p.z > 0 ) ...
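Putting the two patterns together, a sketch of a helper that returns the screen position or null for points behind the camera (worldToScreen is an illustrative name, not from the answer; in newer three.js versions matrix.getInverse( m ) is written matrix.copy( m ).invert()):
var matrix = new THREE.Matrix4(); // create once and reuse
function worldToScreen( point3D, camera, canvas ) {
  // transform into camera space to test whether the point is behind the camera
  var p = point3D.clone();
  matrix.getInverse( camera.matrixWorld );
  p.applyMatrix4( matrix );
  if ( p.z > 0 ) return null; // behind the camera
  // otherwise project to NDC and map to 2D screen space
  var v = point3D.clone().project( camera );
  return {
    x: Math.round( ( v.x + 1 ) * canvas.width / 2 ),
    y: Math.round( ( - v.y + 1 ) * canvas.height / 2 )
  };
}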
three.js r.71
Like the example above, but you can check vector.z to determine whether the point is in front of the camera.
var vector = new THREE.Vector3();
var canvas = renderer.domElement;
vector.set( 1, 2, 3 );
// map to normalized device coordinate (NDC) space
vector.project( camera );
// map to 2D screen space
vector.x = Math.round( ( vector.x + 1 ) * canvas.width / 2 ),
vector.y = Math.round( ( - vector.y + 1 ) * canvas.height / 2 );
// behind the camera if z isn't in 0..1 [frustum range]
if (vector.z > 1) {
  vector = null;
}
To delve a little deeper into this answer:
// behind the camera if z isn't in 0..1 [frustum range]
if (vector.z > 1) {
  vector = null;
}
This is not true. The mapping is not continuous. Points beyond the far plane also map to z-values greater than 1.
What exactly does the z-value of a projected vector stand for? X and Y are in normalised clip space [-1, 1]; what about z?
Would this be true?
projectVector.project(camera);
var inFrontOfCamera = projectVector.z < 1;
Since the camera is located at the origin in camera space, and since the camera always looks down the negative-z axis, points behind the camera end up with a projected z-coordinate greater than 1.
//check if point is behind the camera
if ( p.z > 1 ) ...
NOTICE: if this condition is satisfied, the projected x and y coordinates are point-reflected (centrosymmetric) and need to be negated:
{x: 0.233, y: -0.566, z: 1.388}
// after transform
{x: -0.233, y: 0.566, z: 1.388}
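A short sketch of that correction, assuming the point has already been identified as behind the camera (for example with the camera-space test shown further above):
// v is the projected vector, isBehind the result of the behind-the-camera test
function correctProjection( v, isBehind ) {
  if ( isBehind ) {
    // undo the point reflection introduced by projecting a point behind the camera
    v.x = - v.x;
    v.y = - v.y;
  }
  return v;
}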

Resources