I want to rotate a circle around the center of the canvas.
I was trying to do it like in this tutorial: http://www.html5canvastutorials.com/kineticjs/html5-canvas-kineticjs-rotation-animation-tutorial/ (like the redRect rectangle there), but when I set the offset, my circle is shifted from its original position.
How can I rotate my circle so that it orbits the center of the canvas without using an offset?
You can use "old-fashioned" trigonometry:
Demo: http://jsfiddle.net/m1erickson/ZdZR4/
You can use JavaScript's requestAnimationFrame to drive the animation (or Kinetic's internal animation if you prefer):
function animate(){
    // request a new animation frame
    requestAnimationFrame(animate);
    // change the angle of rotation (in radians)
    rotation += Math.PI / 180;
    // use trigonometry to calculate the circle's new X/Y
    var newX = cx + radius * Math.cos(rotation);
    var newY = cy + radius * Math.sin(rotation);
    // use circle.setPosition to move the circle
    // into its new location
    circle1.setPosition(newX, newY);
    // redraw the layer so the changes are displayed
    layer.draw();
}
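For reference, here is a minimal sketch of the setup the snippet assumes; the names and values are illustrative, not from the original demo:
// illustrative setup for the snippet above (names/values are assumptions)
var cx = stage.getWidth() / 2;  // orbit center: the middle of the Kinetic stage
var cy = stage.getHeight() / 2;
var radius = 100;               // orbit radius in pixels
var rotation = 0;               // current angle in radians
animate();                      // start the animation loop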
I have a 2D texture image that is zoomed in/out via a 2-finger pinch touch.
Currently the image is not panned, i.e. the center of the image is always in the middle.
I want the point between the two fingers to stay between them while zooming.
If I pinch exactly in the center of the image, then the center image point stays between the fingers - good!
But if I pinch near the corner of the image, the point moves away relative to the 2 fingers because of the zoom.
So I need to apply some pan in addition to the zoom, to make the point appear in the same place.
I basically need to transform the camera position such that, for every zoom, the same world coordinate is projected to the same screen coordinate.
Figures 1-3 illustrate the problem.
Figure 1 is the original image.
Currently, when I zoom in, the camera stays in the same position, so the object between the 2 fingers (the cat's eye on the right) drifts from between the 2 fingers as the image zooms (Figure 2).
I want to pan the camera such that the object between the 2 fingers stays between them even after zooming the image (Figure 3).
I used the code below, but the object still drifts as the image zooms in/out.
How should I calculate the amount of shift that needs to be applied to the camera?
Thanks
Here is the code I use to calculate the amount of shift that needs to be applied to the camera:
handleTwoFingerTouchMove( p3_inScreenCoord ) {
    // normalize the screen coord to be in the range of [-1, 1]
    // (see method 1 in https://stackoverflow.com/questions/13542175/three-js-ray-intersect-fails-by-adding-div/)
    let point2dNormalizedX = ( ( p3_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1;
    let point2dNormalizedY = -( ( p3_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1;
    // calc p3 before zoom (in world coords)
    let p3_beforeZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );
    // apply zoom
    this.dollyInOut( this.getZoomScale(), true );
    // calc p3 after zoom (in world coords)
    let p3_afterZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );
    // calc the required shift in camera position
    let deltaX = p3_afterZoom.x - p3_beforeZoom.x;
    let deltaZ = p3_afterZoom.z - p3_beforeZoom.z;
    // shift the camera position
    this.pan( deltaX, deltaZ );
};
I was able to solve my problem. Here is my solution, in the hope that it helps others.
When a two-finger touch starts (i.e. on the touchstart event), the code computes two things (see the sketch after this list):
- centerPoint3d_inWorldCoord0 (Vector3): the world coordinate of the object pointed at (e.g. the cat's eye on the right) when the two-finger touch starts
- centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized (Vector2): the screen-coordinate anchor for zooming via the two-finger touch
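A minimal sketch of that touchstart computation, assumed here from the normalization code in the question (the handler name and its midpoint argument are illustrative):
handleTwoFingerTouchStart( midpoint_inScreenCoord ) {
    // normalized device coordinates of the midpoint between the two fingers
    // (same normalization as in handleTwoFingerTouchMove)
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized = new THREE_Vector2(
        ( ( midpoint_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1,
        -( ( midpoint_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1 );
    // world coordinate currently under that midpoint, before any zoom is applied
    centerPoint3d_inWorldCoord0 = new THREE_Vector3(
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
        -1 ).unproject( this.camera );
};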
While zooming in/out via two-finger pinch (on the touchmove event), in the event listener function, immediately after applying the zoom, I call the following code:
// Calculate centerPoint3d_inWorldCoord2, which is the new world coordinate for
// centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized, given the new zoom setting.
let centerPoint3d_inWorldCoord2 = new THREE_Vector3(
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
    -1 ).unproject( camera );
// compute the shift between the new and the original world coordinate
let delta_inWorldCoords = new THREE_Vector2(
    centerPoint3d_inWorldCoord2.x - centerPoint3d_inWorldCoord0.x,
    centerPoint3d_inWorldCoord2.z - centerPoint3d_inWorldCoord0.z );
// pan the camera to compensate for the shift
pan_usingWorldCoords( delta_inWorldCoords );
The function pan_usingWorldCoords shifts the camera along the x-axis (panLeft) and then along the y-axis (panUp):
pan_usingWorldCoords( delta_inWorldCoord ) {
    panLeft( delta_inWorldCoord.x );
    panUp( delta_inWorldCoord.y );
};
The functions panLeft and panUp are similar to the ones used in three.js-r114/examples/jsm/controls/OrbitControls.js.
Initially, the object pointed at was still drifting from between the 2 fingers as the image zoomed in/out.
To fix this, I added this.camera.updateProjectionMatrix() at the end of each function, so the projection matrix is updated at the end of panLeft before it is used again in panUp.
With the code above, and with the projection matrix updated at the end of panLeft and panUp, the object pointed at when starting the two-finger touch (e.g. the eye on the right) is kept between the 2 fingers while zooming via two-finger pinch.
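For illustration, here is a minimal sketch of panLeft/panUp in the spirit of OrbitControls from three.js r114, with the updateProjectionMatrix() fix described above; the function bodies and the extra camera parameter are assumptions, not the original code:
// Sketch (assumed): pan along the camera's local axes, then update matrices.
const _panVector = new THREE.Vector3();

function panLeft( camera, distance ) {
    // column 0 of the camera's local matrix is the camera's X axis
    _panVector.setFromMatrixColumn( camera.matrix, 0 );
    _panVector.multiplyScalar( -distance );
    camera.position.add( _panVector );
    camera.updateProjectionMatrix(); // the fix: keep matrices in sync for the next unproject
}

function panUp( camera, distance ) {
    // column 1 of the camera's local matrix is the camera's Y axis
    _panVector.setFromMatrixColumn( camera.matrix, 1 );
    _panVector.multiplyScalar( distance );
    camera.position.add( _panVector );
    camera.updateProjectionMatrix(); // same fix as in panLeft
}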
I have a special control called SphericalControls. It's similar to OrbitControls, but it keeps the camera at position (0,0,0) and instead rotates the camera on x and y to look around a scene. It is placed in the middle of a SphereBufferGeometry which has a 360° equirectangular image projected onto it. The user can look around the 360° image, and as they do, the camera's x and y rotation values change.
When the user clicks a button, I need to take these x and y rotation values and rotate the sphere to the rotation of the camera. I then set the camera back to x: 0 and y: 0.
The result is that the camera is reset and the 360° scene has rotated to show the same view that the camera was previously looking at. So to the user the view stays basically static; only the values for the camera rotation and the sphere rotation have swapped.
This works great if I offset the texture on the sphere:
sphereObj.material.map.wrapS = THREE.RepeatWrapping;
sphereObj.material.map.offset.x = camera.rotation.x / (Math.PI * 2);
sphereObj.material.map.needsUpdate = true;
sphereObj.material.needsUpdate = true;
camera.rotation.set(0, 0, 0); // Euler.set expects all three components
// Success!
But what I need to do is not offset the texture, but rotate the entire geometry. I have tried:
var axis = new THREE.Vector3(0, 1, 0).normalize();
var offsetRadian = camera.rotation.x / (Math.PI * 2);
sphere.rotateOnAxis(axis, offsetRadian);
// Fail
But the result is that the sphere rotation is off by approx 30%. Any help is appreciated.
Every object's rotational data is stored in its respective .quaternion object. Both camera and sphereObj have a quaternion, so what you can do is copy the camera's rotational data into the sphere:
// get the camera's rotation (cloned, so inverting it won't modify the camera)
var targetRotation = camera.quaternion.clone();
// invert the rotation
targetRotation.inverse();
// set the sphere's rotation
sphereObj.quaternion.copy(targetRotation);
camera.rotation.set(0, 0, 0);
I'm not entirely sure you need the .inverse() line... if you notice the sphere rotating in the opposite direction, just remove that line to get the desired result. (As an aside, the original rotateOnAxis attempt divided the angle by Math.PI * 2, which converts radians into a texture-offset fraction; rotateOnAxis expects plain radians, which would explain the rotation being off.)
I'm struggling with the positioning of some A-Frame text geometry and am wondering if I'm going about this the wrong way 😅
I'm finding that when the text renders, its origin is at the minimum point of all the axes (bottom-left-near). This means the text expands more to the top-right-far than I would expect. This is different from A-Frame geometry entities, where the origin is at the very center of all axes.
Sorry if the above phrasing is confusing, I'm still not sure how best to describe things in 3D space 😆
What I'm thinking I need to do is calculate the bounding box after the element has loaded and change the position to the center. I've based that approach on the answer here: AFRAME text-geometry component rotation from center?
Does that seem like the right direction? If so, I'm currently trying to do this through an A-Frame component:
AFRAME.registerComponent('center-all', {
    update() {
        // Need to wait for the element to be loaded
        setTimeout(() => {
            const mesh = this.el.getObject3D('mesh');
            const bbox = new THREE.Box3().setFromObject(this.el.object3D);
            const offsetX = (bbox.min.x - bbox.max.x) / 2;
            const offsetY = (bbox.min.y - bbox.max.y) / 2;
            const offsetZ = (bbox.min.z - bbox.max.z) / 2;
            mesh.position.set(offsetX, offsetY, offsetZ);
        }, 0);
    }
});
This code illustrates the problem I'm seeing
This code shows my attempted solution
This code (with the translation hard coded) is more like what I would like
TextGeometry and TextBufferGeometry are both subclasses of their respective geometry classes, and so both have the boundingBox property. You just need to compute it, then get its center point:
textGeo.computeBoundingBox();
const center = textGeo.boundingBox.getCenter(new THREE.Vector3());
Then center will accurately reflect the center of the geometry, in local space. If you need it in global space, you will need to apply the matrix of the mesh that contains textGeo to the center vector, e.g.:
textMesh.updateMatrixWorld();
center.applyMatrix4(textMesh.matrixWorld);
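Applied to the A-Frame component from the question, a minimal sketch could look like this; waiting for the 'object3dset' event instead of a setTimeout is an assumption on my part, not part of the answer above:
// Sketch: recenter the text mesh using its own geometry's bounding box.
AFRAME.registerComponent('center-all', {
    init() {
        // the mesh is set asynchronously, so wait for it instead of using setTimeout
        this.el.addEventListener('object3dset', () => {
            const mesh = this.el.getObject3D('mesh');
            if (!mesh || !mesh.geometry) { return; }
            mesh.geometry.computeBoundingBox();
            const center = mesh.geometry.boundingBox.getCenter(new THREE.Vector3());
            // shift the mesh so the geometry's center lands on the entity's origin
            mesh.position.copy(center).multiplyScalar(-1);
        });
    }
});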
I hopefully have a simple problem I can't get an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I assume I have to transform the CSS coordinates to three.js coordinates. I found something I thought did something like this (note the overuse of "something" to signify I am probably mistaken).
I can add an HTML element to the same scene/camera as the WebGL renderer, obviously using a different renderer, but I am unsure how to proceed from there.
Basically I want to know:
1. How should I change the size of the div, preserving aspect ratio if need be? In essence I want the div or element to fill the screen at some camera depth.
2. How do I place the div at the centre of the scene by default? Mine seems to be shifted 1000 in the z direction, but this might be the size of the div (img), which I have to bring into view.
3. How do I draw a line between the WebGL sphere and the HTML div/img?
thanks in advance!
Unfortunately you have asked 3 questions; it is tricky to address them all at once.
I will explain how to position a DIV element on top of a 3D object. My example is a tooltip that appears when you hover over the object with the mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started.
First of all you need to subscribe to mouse events and convert the 2D coordinates of the mouse to relative coordinates on the viewport. This is very well explained here: Get mouse clicked point's 3D coordinate in three.js
Having the 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();

function handleManipulationUpdate() {
    // clean up previous results; the mouse moved, so they're obsolete now
    latestMouseIntersection = undefined;
    hoveredObj = undefined;

    raycaster.setFromCamera(mouse, camera);
    var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
    if (intersects.length > 0) {
        // keep the 3D intersection point for the next steps
        latestMouseIntersection = intersects[0].point;
        // remember which object was hovered, as we will need to extract the tooltip text from it
        hoveredObj = intersects[0].object;
    }

    // ... do anything else here;
    // with some conditions it may show or hide the tooltip
    showTooltip();
}

// The following two functions convert mouse coordinates from screen space
// to the three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
    coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
    coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}

function onMouseMove(event) {
    updateMouseCoords(event, mouse);
    handleManipulationUpdate();
}

window.addEventListener('mousemove', onMouseMove, false);
And finally, the most important part: DIV element placement. To understand the code it is essential to get comfortable with the Vector3.project method.
The sequence of calculations is as follows:
1. Get the 2D mouse coordinates.
2. Raycast the object and remember the 3D coordinate of the intersection (if any).
3. Project the 3D coordinate back into 2D. (This step may seem redundant here, but what if you want to trigger the object tooltip programmatically? You won't have mouse coordinates then.)
4. Mess around to place the DIV centered above the 2D point, with a nice margin.
// This will move the tooltip to the current mouse position and show it by timer.
function showTooltip() {
    var divElement = $("#tooltip");

    // element found and mouse hovers some object?
    if (divElement && latestMouseIntersection) {
        // hide until the tooltip is ready (prevents some visual artifacts)
        divElement.css({
            display: "block",
            opacity: 0.0
        });

        //!!! === IMPORTANT ===
        // The DIV element is positioned here
        var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
        var canvasHalfHeight = renderer.domElement.offsetHeight / 2;

        // project the 3D intersection point back into 2D screen space
        var tooltipPosition = latestMouseIntersection.clone().project(camera);
        tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
        tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;

        var tooltipWidth = divElement[0].offsetWidth;
        var tooltipHeight = divElement[0].offsetHeight;

        divElement.css({
            left: `${tooltipPosition.x - tooltipWidth/2}px`,
            top: `${tooltipPosition.y - tooltipHeight - 5}px`
        });

        // get the text from the hovered object (we store it in .userData)
        divElement.text(hoveredObj.userData.tooltipText);

        divElement.css({
            opacity: 1.0
        });
    }
}
I'm trying to create a camera that follows an object that rotates on an orbit around a sphere. But every time the camera reaches the polar coordinates of the orbit, the direction changes. I just set the position of the camera according to the object it has to follow and call lookAt afterwards:
function render() {
    rotation += 0.002;

    // get the next point on the path and move the marker there
    pt = path.getPoint( t );
    marker.position.set( pt.x, pt.y, pt.z );
    marker.lookAt( new THREE.Vector3( 0, 0, 0 ) );

    // rotate the mesh that illustrates the orbit
    mesh.rotation.y = rotation;

    // set the camera position
    var cameraPt = cameraPath.getPoint( t );
    camera.position.set( cameraPt.x, cameraPt.y, cameraPt.z );
    camera.lookAt( marker.position );

    t = (t >= 1) ? 0 : t + 0.002;

    renderer.render( scene, camera );
}
Here's a complete fiddle: http://jsfiddle.net/krw8nwLn/69/
I've created another fiddle with a second cube which represents the desired camera behaviour: http://jsfiddle.net/krw8nwLn/70/
What happens is that the camera's lookAt function will always try to align the camera with the horizontal plane (so that the "up" direction is always (0, 1, 0)). When you reach the top and bottom of the ellipse path, the camera will instantaneously rotate 180° so that up is still up. You can also see this in your "desired behaviour" example, as the camera cube rotates so that the colors on its other side are shown.
A solution is to not use lookAt in this case, because it does not support cameras doing flips like this. Instead, set the camera's rotation directly. (This requires some math, but you look like a math guy.)
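For example, here is a minimal sketch of one way to do that (my illustration, not code from the original answer): build the look-at rotation manually with an up vector carried over from the previous frame, so the camera can roll through the poles instead of snapping back to world-up.
// Sketch: lookAt with a persistent "up" vector (names are illustrative).
var _lookAtMatrix = new THREE.Matrix4();
var _rollingUp = new THREE.Vector3( 0, 1, 0 ); // persists between frames

function lookAtWithRollingUp( camera, target ) {
    _lookAtMatrix.lookAt( camera.position, target, _rollingUp );
    camera.quaternion.setFromRotationMatrix( _lookAtMatrix );
    // carry the camera's new local Y axis into the next frame, so "up"
    // follows the camera around the poles instead of flipping 180°
    _rollingUp.set( 0, 1, 0 ).applyQuaternion( camera.quaternion );
}
In the render loop above, camera.lookAt( marker.position ) would then be replaced with lookAtWithRollingUp( camera, marker.position ).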