I have a GameObject with an Animator Controller. The GameObject moves from the bottom of the screen to the top; this movement is controlled by an animation on the Y axis.
What I am trying to do is also move it randomly on the X axis while this animation is moving it on the Y axis.
What I did was set a few animation events in this main animation:
In these events I call a script that moves the object between two X positions:
public void RandomX ()
{
    // pick a random X between the two bounds and apply it
    var pos = transform.position;
    pos.x = Random.Range(-1.75f, 2.85f);
    transform.position = pos;
    print(pos.x);
}
But this is not working. The animation plays with no change on the X axis.
Thanks in advance for any help provided.
The animation overwrites the transform position. For your object to be controlled by code, you would have to change the hierarchy:
- object A with a script controlling the X position
  - object B (a child of A) with the animation
Now you can move the parent object A from code; the animation controls only the child object B.
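A minimal sketch of that setup (the class name is made up; it assumes the script sits on the parent object A, and that something forwards the call to it, since animation events only reach scripts on the animated object itself):
using UnityEngine;

// Attach to the parent object A. The child object B keeps the Animator,
// so the clip keeps driving B's local Y movement while A stays free to move.
public class RandomXParent : MonoBehaviour
{
    // Call this from a small relay on the animated child (triggered by the
    // animation event), or from the parent's own timing logic.
    public void RandomX()
    {
        var pos = transform.position;
        pos.x = Random.Range(-1.75f, 2.85f); // bounds taken from the question
        transform.position = pos;
    }
}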
I have a 2D texture image that is zoomed in/out via 2-finger touch and pinch.
Currently the image is not panned, i.e. the center of the image is always in the middle.
I want the center point between the two-finger touch to stay between the 2 fingers.
If I pinch exactly in the center of the image, then the center image point will stay between the fingers - good!
But if I pinch near the corner of the image the point will move away, relative to the 2 fingers, because of the zoom.
So I need to apply some pan in addition to the zoom, to make the point appear in the same place.
I basically need to transform the camera position such that, for every zoom, the same world coordinate is projected to the same screen coordinate.
Figures 1-3 illustrate the problem.
Figure1 is the original image.
Currently, when I zoom in, the camera stays in the same position, so the object between the 2 fingers (the cat's eye on the right) drifts away from the 2 fingers as the image zooms (Figure 2).
I want to pan the camera such that the object between the 2 fingers stays between the 2 fingers even after zooming the image (Figure 3).
I used the code below, but the object still drifts as the image zooms in/out.
How should I calculate the amount of shift that needs to be applied to the camera?
Thanks
Code to calculate the amount of shift that needs to be applied to the camera
handleTwoFingerTouchMove( p3_inScreenCoord ) {
    // normalize the screen coord to be in the range of [-1, 1]
    // (See method1 in https://stackoverflow.com/questions/13542175/three-js-ray-intersect-fails-by-adding-div/)
    let point2dNormalizedX = ( ( p3_inScreenCoord.x - windowOffset.left ) / windowWidth ) * 2 - 1;
    let point2dNormalizedY = -( ( p3_inScreenCoord.y - windowOffset.top ) / windowHeight ) * 2 + 1;

    // calc p3 before zoom (in world coords)
    let p3_beforeZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // Apply zoom
    this.dollyInOut( this.getZoomScale(), true );

    // calc p3 after zoom (in world coords)
    let p3_afterZoom = new THREE_Vector3( point2dNormalizedX, point2dNormalizedY, -1 ).unproject( this.camera );

    // calc the required shift in camera position
    let deltaX = p3_afterZoom.x - p3_beforeZoom.x;
    let deltaZ = p3_afterZoom.z - p3_beforeZoom.z;

    // shift in camera position
    this.pan( deltaX, deltaZ );
};
I was able to solve my problem. Here is my solution in the hope that it helps others.
When first applying a 2-finger touch (i.e. on the touchstart event), the code computes two values (sketched just below):
- the world coordinate of the object pointed at when starting the two-finger touch (e.g. the cat's eye on the right): centerPoint3d_inWorldCoord0 (Vector3)
- the screen-coordinate anchor for zooming via two-finger touch: centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized (Vector2)
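The touchstart part is not shown in the snippets below, so here is a rough sketch of how it could look, reusing the naming and normalization from handleTwoFingerTouchMove above (the handler name and its parameters are assumptions):
handleTwoFingerTouchStart( touch0_inScreenCoord, touch1_inScreenCoord ) {
    // midpoint between the two fingers, in screen coords
    let midX = ( touch0_inScreenCoord.x + touch1_inScreenCoord.x ) / 2;
    let midY = ( touch0_inScreenCoord.y + touch1_inScreenCoord.y ) / 2;

    // normalize the midpoint to [-1, 1], same convention as handleTwoFingerTouchMove
    centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized = new THREE_Vector2(
        ( ( midX - windowOffset.left ) / windowWidth ) * 2 - 1,
        -( ( midY - windowOffset.top ) / windowHeight ) * 2 + 1 );

    // world coordinate under the midpoint, captured once before any zoom is applied
    centerPoint3d_inWorldCoord0 = new THREE_Vector3(
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
        centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
        -1 ).unproject( this.camera );
};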
While zooming in/out via two-finger pinch (on the touchmove event), immediately after applying the zoom in the event listener function, I call the following code:
// Calculate centerPoint3d_inWorldCoord2, which is the new world coordinate for
// centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized given the new zoom setting.
let centerPoint3d_inWorldCoord2 = new THREE_Vector3( centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.x,
                                                     centerPoint2dBetweenTwoFingerTouch_inScreenCoordNormalized.y,
                                                     -1 ).unproject( camera );

// compute the shift in world coordinates between the new vs the original world coordinate
let delta_inWorldCoords = new THREE_Vector2( centerPoint3d_inWorldCoord2.x - centerPoint3d_inWorldCoord0.x,
                                             centerPoint3d_inWorldCoord2.z - centerPoint3d_inWorldCoord0.z );

// pan the camera, to compensate for the shift
pan_usingWorldCoords( delta_inWorldCoords );
The function pan_usingWorldCoords shifts the camera in the x axis (panLeft), and then in the y axis (panUp)
pan_usingWorldCoords( delta_inWorldCoord ) {
    panLeft( delta_inWorldCoord.x );
    panUp( delta_inWorldCoord.y );
};
The functions panLeft, panUp are similar to the functions that are used in three.js-r114/examples/jsm/controls/OrbitControls.js
Initially the object pointed at was still drifting away from between the 2 fingers as the image zoomed in/out.
I added this.camera.updateProjectionMatrix() at the end of each function, so the projection matrix updated at the end of panLeft is current before it is used again in panUp.
With the code above, and with the projection matrix updated at the end of panLeft and panUp, the object pointed at when starting the two-finger touch (e.g. the eye on the right) is kept between the 2 fingers while zooming via two-finger pinch.
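For reference, here is a rough sketch of what such pan helpers might look like; this is not the exact OrbitControls code, and it assumes the camera is simply translated along its local axes, with the updateProjectionMatrix() call added at the end as described above:
panLeft( distance ) {
    let v = new THREE_Vector3();
    // column 0 of the camera matrix is the camera's local X axis in world space
    v.setFromMatrixColumn( this.camera.matrix, 0 );
    v.multiplyScalar( -distance );
    this.camera.position.add( v );
    this.camera.updateProjectionMatrix(); // the fix described above
};

panUp( distance ) {
    let v = new THREE_Vector3();
    // column 1 of the camera matrix is the camera's local Y axis in world space
    v.setFromMatrixColumn( this.camera.matrix, 1 );
    v.multiplyScalar( distance );
    this.camera.position.add( v );
    this.camera.updateProjectionMatrix(); // the fix described above
};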
I have what is hopefully a simple problem that I can't get an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I assume I have to transform the CSS coordinates into three.js coordinates. I found something I thought did something like this (note: the overuse of "something" is to signal that I am probably mistaken).
I can add an HTML element to the same scene/camera as the WebGL renderer, obviously using a different renderer, but I am unsure how to proceed from there.
Basically I want to know:
- How should I change the size of the div, preserving aspect ratio if need be? In essence I want the div or element to fill the screen at some camera depth.
- How do I place the div at the centre of the scene by default? Mine seems to be shifted 1000 in the Z direction, but this might be the size of the div (img) which I have to bring into view.
- How do I draw a line between the WebGL sphere and the HTML div/img?
thanks in advance!
Unfortunately you have asked 3 questions; it is tricky to address them all at once.
I will explain how to position a DIV element on top of a 3D object. My example is a tooltip that appears when you hover over the object with the mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started.
First of all you need to subscribe to mouse events and convert the 2D coordinates of the mouse into relative coordinates on the viewport. This is explained very well here: Get mouse clicked point's 3D coordinate in three.js
Having 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();

function handleManipulationUpdate() {
    // cleanup previous results, mouse moved and they're obsolete now
    latestMouseIntersection = undefined;
    hoveredObj = undefined;

    raycaster.setFromCamera(mouse, camera);
    {
        var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
        if (intersects.length > 0) {
            // keep point in 3D for next steps
            latestMouseIntersection = intersects[0].point;
            // remember what object was hovered, as we will need to extract tooltip text from it
            hoveredObj = intersects[0].object;
        }
    }

    ... // do anything else
    // with some conditions it may show or hide tooltip
    showTooltip();
}
// Following two functions will convert mouse coordinates
// from screen to three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
    coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
    coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}

function onMouseMove(event) {
    updateMouseCoords(event, mouse);
    handleManipulationUpdate();
}

window.addEventListener('mousemove', onMouseMove, false);
And finally, the most important part: DIV element placement. To understand the code it is essential to be comfortable with the Vector3.project method.
The sequence of calculations is as follows:
1. Get the 2D mouse coordinates.
2. Raycast the object and remember the 3D coordinate of the intersection (if any).
3. Project the 3D coordinate back into 2D (this step may seem redundant here, but what if you want to trigger the object tooltip programmatically? You won't have mouse coordinates).
4. Mess around to place the DIV centered above the 2D point, with a nice margin.
// This will move tooltip to the current mouse position and show it by timer.
function showTooltip() {
    var divElement = $("#tooltip");

    // element found and mouse hovers some object?
    if (divElement && latestMouseIntersection) {
        // hide until tooltip is ready (prevents some visual artifacts)
        divElement.css({
            display: "block",
            opacity: 0.0
        });

        //!!! === IMPORTANT ===
        // DIV element is positioned here
        var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
        var canvasHalfHeight = renderer.domElement.offsetHeight / 2;

        var tooltipPosition = latestMouseIntersection.clone().project(camera);
        tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
        tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;

        var tooltipWidth = divElement[0].offsetWidth;
        var tooltipHeight = divElement[0].offsetHeight;

        divElement.css({
            left: `${tooltipPosition.x - tooltipWidth / 2}px`,
            top: `${tooltipPosition.y - tooltipHeight - 5}px`
        });

        // get text from hovered object (we store it in .userData)
        divElement.text(hoveredObj.userData.tooltipText);

        divElement.css({
            opacity: 1.0
        });
    }
}
How can I attach a worldspace UI to my car object? I'm using the UI Image rotational fill for a speedometer; how would I attach this image to my car? I've made it work somewhat by setting the position and rotation to that of the car, but it moves from its position when turning, while still being rotated correctly. Help?
Code:
Vector3 basePos = relevantParent.transform.position;
Vector3 tweakedPos = basePos;
tweakedPos.x = tweakedPos.x + xOffset;
tweakedPos.y = tweakedPos.y + yOffset;
tweakedPos.z = tweakedPos.z + zOffset;
transform.position = tweakedPos;
transform.localRotation = relevantParent.transform.localRotation;
https://docs.unity3d.com/ScriptReference/RectTransform.html
RectTransform, which a world-space canvas has so that it knows its position, rotation and so on, inherits from Transform.
Transform can have its parent object set.
So grab the RectTransform from your world-space canvas and set its parent to be your car's transform.
// assumption: 'worldSpaceCanvas' is a reference to your world-space canvas GameObject
RectTransform rectTransform = worldSpaceCanvas.GetComponent<RectTransform>();
rectTransform.parent = transform; // transform of your car.
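As a small self-contained sketch of that idea (the class and field names here are just illustrative, not part of the answer above):
using UnityEngine;

// Hypothetical helper: attach to the world-space canvas of the speedometer
// and assign the car's transform in the inspector.
public class AttachSpeedometerToCar : MonoBehaviour
{
    public Transform car;

    void Start()
    {
        RectTransform rectTransform = GetComponent<RectTransform>();
        // Parent the canvas to the car; 'true' keeps the current world position,
        // so the speedometer stays where it was placed relative to the car.
        rectTransform.SetParent(car, true);
    }
}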
I have a GameObject with Animator and looped animation clip.
This animation changes X coordinate from 0 to 10 and back.
I need to add another animation to the first one that increases GameObject's scale and changes its color to red simultaneously.
After scale and color change GameObject keeps these parameters and continues to move according to the first animation clip.
The only way I managed to work around this was to write a custom script with a coroutine:
IEnumerator Animate()
{
    float scaleDelta = 0.2f;
    float colorDelta = 0.02f;

    for (int i = 0; i < 50; i++)
    {
        // shift the color towards red by reducing the green and blue channels
        spriteRenderer.color = new Color(
            spriteRenderer.color.r,
            spriteRenderer.color.g - colorDelta,
            spriteRenderer.color.b - colorDelta);

        // grow the sprite on X and Y
        transform.localScale = new Vector3(
            transform.localScale.x + scaleDelta,
            transform.localScale.y + scaleDelta,
            transform.localScale.z);

        yield return new WaitForSeconds(0.02f);
    }
}
This works for linear interpolation, but it requires writing additional code, and even more code for non-linear transformations.
How can I achieve the same result with Mecanim?
Sample project link: https://drive.google.com/file/d/0B8QGeF3SuAgTU0JWNGd2RnpUU00/view?usp=sharing
I want to rotate a circle around the center of the canvas.
I was trying to do it as in this tutorial: http://www.html5canvastutorials.com/kineticjs/html5-canvas-kineticjs-rotation-animation-tutorial/ (redRect - like this rectangle), but when I set the offset, my circle is shifted from its original position.
How can I rotate my circle to make it orbit around the center of the canvas without using an offset?
You can use "old-fashioned" trigonometry:
Demo: http://jsfiddle.net/m1erickson/ZdZR4/
You can use JavaScript's requestAnimationFrame to drive the animation (or Kinetic's internal animation if you prefer):
function animate(){
    // request a new animation frame
    requestAnimationFrame(animate);

    // change the angle of rotation (in radians)
    rotation += Math.PI / 180;

    // use trigonometry to calculate the circle's new X/Y
    var newX = cx + radius * Math.cos(rotation);
    var newY = cy + radius * Math.sin(rotation);

    // use circle.setPosition to move the circle
    // into its new location
    circle1.setPosition(newX, newY);

    // redraw the layer so the changes are displayed
    layer.draw();
}
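For context, here is a minimal setup sketch of the variables the loop above relies on (the concrete names and values are assumptions, not taken from the demo):
var stage = new Kinetic.Stage({ container: 'container', width: 400, height: 400 });
var layer = new Kinetic.Layer();

// orbit around the center of the canvas
var cx = stage.getWidth() / 2;
var cy = stage.getHeight() / 2;
var radius = 100;   // orbit radius
var rotation = 0;   // current angle in radians

var circle1 = new Kinetic.Circle({
    x: cx + radius, // start on the orbit
    y: cy,
    radius: 15,
    fill: 'red'
});

layer.add(circle1);
stage.add(layer);

animate(); // start the requestAnimationFrame loop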