I am making an analog clock in Unity3D and I'm moving the hands with a finger gesture. This is the code that moves the hands:
void OnMouseDrag(){
    // Convert the pointer position into world space
    mouseClickPos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    // Direction from the clock centre to the pointer
    Vector3 dir = mouseClickPos - transform.position;
    float angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
    // Snap to 6-degree steps (one minute tick)
    angle = Mathf.Round(angle / 6f) * 6f;
    hand1.transform.rotation = Quaternion.AngleAxis(angle, Vector3.forward);
}
The problem is that I want the hour hand to move by angle/10 at the same time as the minute hand (if the minute hand moves 6 degrees, the hour hand should move 0.6 degrees), but when I make a full circle with the minute hand, the hour hand's position resets. It should work in both directions.
Can someone help me please?
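One way to avoid the reset is to accumulate the minute hand's change in angle across drag events and drive both hands from that running total, rather than setting the hour hand from the absolute snapped angle (which wraps back to 0 after a full turn). A minimal sketch of that bookkeeping, with illustrative names (shown in TypeScript; the arithmetic ports directly to the C# OnMouseDrag above):

// Running total of the snapped minute-hand angle, in degrees.
let totalMinuteAngle = 0;    // can exceed 360 or go negative
let lastSnappedAngle = 0;    // snapped angle from the previous drag event

// Map a raw difference into (-180, 180] so a wrap from 354 to 0 counts as +6, not -354.
function wrapDelta(delta: number): number {
  while (delta > 180) delta -= 360;
  while (delta <= -180) delta += 360;
  return delta;
}

function onDrag(snappedAngle: number): { minute: number; hour: number } {
  totalMinuteAngle += wrapDelta(snappedAngle - lastSnappedAngle);
  lastSnappedAngle = snappedAngle;
  // Apply these with Quaternion.AngleAxis(angle, Vector3.forward) in the Unity handler.
  return { minute: totalMinuteAngle, hour: totalMinuteAngle / 10 };
}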
I'm a bit stuck implementing rotation of an avatar (Ready Player Me) that follows the user's perspective camera, which is controlled by PointerLockControls.
Avatar has the following hierarchy:
-bone Neck (parent)
-bone Head (child)
The goal is to take the current camera rotation and split it between the Head and Neck.
I want the head to rotate no more than PI/2 (or even PI/4) in each direction (X, Y, Z). The rest of the rotation should be handled by the neck. It's fine if the neck stays static on X and Z and rotates only around Y.
If I do a simple subtraction following this formula (skipping the sign handling):
neckRotationY = cameraRotationY < PI/2 ? 0 : cameraRotationY - (cameraRotationY % (PI/2));
headRotationY = cameraRotationY % (PI/2);
everything works until the camera starts looking backwards. Instead of rotating around Y to look behind (at 2PI), the rotation is expressed around X and Z, so the simple formula doesn't work.
Could somebody give me an idea of how I can split the camera rotation into neck and head rotations?
Thanks!
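One possible direction, sketched below under some assumptions: decompose the camera quaternion with a 'YXZ' Euler order so the yaw stays unambiguous even when looking backwards, clamp the head's share of the yaw, and give the remainder to the neck. This assumes the bones' local axes roughly line up with the world axes in the rest pose (a real rig may need an extra offset); neckBone, headBone and the PI/4 limit are illustrative:

import * as THREE from 'three';

const euler = new THREE.Euler();

function splitCameraRotation(camera: THREE.Camera,
                             neckBone: THREE.Bone,
                             headBone: THREE.Bone): void {
  // 'YXZ' order gives an unambiguous yaw/pitch/roll decomposition, so looking
  // backwards stays a pure Y rotation instead of flipping into X and Z.
  euler.setFromQuaternion(camera.quaternion, 'YXZ');
  const yaw = euler.y;     // in (-PI, PI]
  const pitch = euler.x;

  // The head takes at most PI/4 of the yaw; the neck takes the rest.
  const headYaw = THREE.MathUtils.clamp(yaw, -Math.PI / 4, Math.PI / 4);
  const neckYaw = yaw - headYaw;

  neckBone.rotation.set(0, neckYaw, 0, 'YXZ');      // neck: Y only
  headBone.rotation.set(pitch, headYaw, 0, 'YXZ');  // head: pitch plus remaining yaw
}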
Part of the application I'm writing requires that I handle multi-touch interaction to handle things like panning and zooming. After messing around with ways to implement it, it got me thinking... is there a way to implement panning and zooming with a constant amount of memory, regardless of how many touches there are on the screen?
How will zoom and pan function?
Input is in the form of one changed touch. Every time a touch is moved, the zoom and pan must be corrected. The information provided with this event is the newPosition and the oldPosition of the touch that moved.
...additionally, a touch can be added or removed. In that event, there is just a position.
For panning, the screen will pan just as much as the average change in position of all the touches. So, if one touch moves by (15,5), and there are 5 touches (4 of which stayed put), then the screen will pan by (3,1)
For zooming, the screen will zoom proportional to the change in average distance of each touch from the center (the average position of all the touches).
What have I tried?
So far, I've decided that I need to keep track of numberOfTouches, the center (the average position of all the touches), and the averageDistance from that center.
Panning is easy. Whenever a touch moves by changeInPosition, pan by changeInPosition/numberOfTouches. Easy.
I haven't figured out zooming. I think I need to keep track of the center, so I know what point to zoom in on. I also think I need to keep track of the averageDistance, so I know how much to zoom. But I can't figure out how to maintain averageDistance. It's easy to keep track of if the center doesn't move, but how do I find the new averageDistance if the center does move?
So, the question is, given that one input, that one changed point, that one new touch, or that one removed touch... is there a way, without storing that touch in an array (AKA: with constant memory), to implement zoom and pan?
Am I on the right track? Can it be done with constant memory? Or must all the touches be tracked?
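For what it's worth, the parts that clearly can be maintained in constant memory are the touch count and the center, and therefore the pan. A sketch of that bookkeeping under the events described above (illustrative names; averageDistance is deliberately left out, since maintaining it is the open question):

interface Point { x: number; y: number; }

let numberOfTouches = 0;
const center: Point = { x: 0, y: 0 };

// Returns the pan delta: moving one of n touches shifts the average by delta / n.
function touchMoved(oldPos: Point, newPos: Point): Point {
  const dx = (newPos.x - oldPos.x) / numberOfTouches;
  const dy = (newPos.y - oldPos.y) / numberOfTouches;
  center.x += dx;
  center.y += dy;
  return { x: dx, y: dy };
}

function touchAdded(pos: Point): void {
  // New average = (old sum + pos) / (n + 1).
  center.x = (center.x * numberOfTouches + pos.x) / (numberOfTouches + 1);
  center.y = (center.y * numberOfTouches + pos.y) / (numberOfTouches + 1);
  numberOfTouches++;
}

function touchRemoved(pos: Point): void {
  // Reverse of touchAdded; only valid while numberOfTouches > 1.
  center.x = (center.x * numberOfTouches - pos.x) / (numberOfTouches - 1);
  center.y = (center.y * numberOfTouches - pos.y) / (numberOfTouches - 1);
  numberOfTouches--;
}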
Just to give you a counter-example.
Let's assume such a method exists with constant memory, and you have seen a series of 100 touches, all of them at (10,10), while your stored center is at (15,15), so the stored average distance from the center is about 7 (√50).
You now execute a pan such that the new center is (10,10). If you had the whole history of touches, the new average distance from the new center would now be 0, but because you don't have the touch history, you have no way to update the average distance; it could be anything. Maybe the previous 100 touches were equally distributed over your space, or all in one half, or, as in the example I chose, all at the same location.
So I would say there is no constant-memory approach. What you can do is maintain a history window of the last N touches and recalculate the average distance over them each time. You can keep the zoom amount as of the oldest touch in the history (N) and apply the zoom contributed by the remaining touches. On each new touch, roll that baseline zoom forward to the next-oldest touch (N-1), discard the oldest touch (N), and insert the new touch (1).
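A rough sketch of that sliding-window bookkeeping (illustrative names; N is tunable, and the zoom-baseline handling described above is omitted for brevity):

interface Point { x: number; y: number; }

const N = 32;                   // window size
const recent: Point[] = [];     // last N touch positions, oldest first

function pushTouch(p: Point): void {
  if (recent.length === N) recent.shift();   // discard the oldest touch
  recent.push(p);                            // insert the new touch
}

function stats(): { center: Point; averageDistance: number } {
  const n = recent.length;
  const center = recent.reduce(
    (c, p) => ({ x: c.x + p.x / n, y: c.y + p.y / n }),
    { x: 0, y: 0 });
  const averageDistance =
    recent.reduce((s, p) => s + Math.hypot(p.x - center.x, p.y - center.y), 0) / n;
  return { center, averageDistance };
}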
Not sure if Pythagoras would agree with me, but maybe this will spark a decent answer:
center.x = 0
center.y = 0
for each touch position 'p':
    center.x = center.x + p.x
    center.y = center.y + p.y
averageDistance = sqrt((center.x * center.x) + (center.y * center.y))
averageDistance = averageDistance / (2 * (numberOfTouches - 1))
center.x = center.x / numberOfTouches
center.y = center.y / numberOfTouches
BTW, if you allow the center to move while zooming, this will cause a pan-and-zoom.
Currently, I'm taking each corner of my object's bounding box and converting it to Normalized Device Coordinates (NDC), keeping track of the maximum and minimum NDC values. I then calculate the middle of that NDC range, find the corresponding point in the world, and have my camera look at it.
<Determine max and minimum NDCs>
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
// Unproject the NDC midpoint back into world space
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);
// Step 'distance' along the ray from the camera through that point and look at it
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance));
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal (ignoring the sign) when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers, and as I step closer and closer the 'error' between the numbers grows bigger and bigger, i.e. the errors for the first few centerings are: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and an elevation, and the camera will be positioned so that it's viewing the object from that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point though, I look too far right and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is 1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move that distance in that direction, i.e. 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane through that point that is parallel to your camera's image plane, i.e. perpendicular to the direction the camera is facing.
Next, raycast from your camera onto that plane along the direction the camera is facing; the difference between the object's centre point and the point where the ray hits the plane is the amount you need to move the camera. This should work irrespective of the direction or position of the camera and the object.
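A sketch of that idea in Three.js terms (illustrative names; assumes a perspective camera and that objectCenter is the object's world-space centre, and it ignores the degenerate case where the plane lies behind the camera):

import * as THREE from 'three';

function offsetToCenter(camera: THREE.PerspectiveCamera,
                        objectCenter: THREE.Vector3): THREE.Vector3 {
  // Direction the camera is facing, in world space.
  const viewDir = new THREE.Vector3();
  camera.getWorldDirection(viewDir);

  // Plane through the object's centre, perpendicular to the view direction.
  const plane = new THREE.Plane().setFromNormalAndCoplanarPoint(viewDir, objectCenter);

  // Ray from the camera along the view direction, intersected with that plane.
  const ray = new THREE.Ray(camera.position.clone(), viewDir);
  const hit = new THREE.Vector3();
  ray.intersectPlane(plane, hit);

  // Moving the camera by (objectCenter - hit) puts the centre on the view ray.
  return objectCenter.clone().sub(hit);
}

// Usage (hypothetical): camera.position.add(offsetToCenter(camera, objectCenter));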
You are running into a chicken-and-egg problem. Every time you change the camera's attributes, you effectively change where your object is projected in NDC space. So even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere, make it as canonical as possible (i.e. give it an aspect ratio of 1), and place your object on the camera's z-axis. Is this not possible?
Imagine a camera aimed at a sphere that is free to rotate around its centre.
Imagine that the user can rotate this sphere by reaching out from the camera to touch the closest point on this sphere and flick it round.
This is what I'm trying to implement in iOS.
OpenGl ES on Iphone - Displaying and rotating 3D objects <-- in this linked question I'm trying to figure out a suitable framework.
I've since opted for http://nineveh.gl/ (it is still in beta, but works fine with a little nudging) and got its rotation demo working.
Every time the user's finger moves on the touchscreen, the code captures that instantaneous vector, which gets added into the overall velocity vector on every update (at 60 Hz):
- (void) drawView
{
    static CGPoint v = { 0.0, 0.0 };

    v.x += _vel.x;
    v.y += _vel.y;

    // resetting to 0 makes sure each touch movement only gets fed in once
    _vel.x = _vel.y = 0.0;

    // exponential damping so the spin gradually slows to a stop
    float decay = 0.93;
    v.x *= decay;
    v.y *= decay;

    // this is wrong...
    [_mesh rotateRelativeToX: v.y
                         toY: v.x
                         toZ: 0.0 ];

    [_camera drawCamera];
}
This resultant vector should then be applied to the mesh.
What I have above initially seems to work: if I swipe horizontally it works perfectly, and similarly vertically. But as soon as I start to combine the rotations, it goes out of sync. Say I rotate 10° horizontally and then 10° vertically; now it no longer responds properly.
Can anyone elucidate the problem?
From reading up, I suspect a solution will involve quaternions...
Can anyone suggest how I can solve this? This is the API reference I have to play with: http://nineveh.gl/docs/Classes/NGLObject3D.html
I notice that there is a rotationSpace property that may come in handy, and also a rotateRelativeWithQuaternion method.
I'm hoping that someone out there is familiar with this problem and can see how to wield this API to slay it.
_cube1.rotationSpace = NGLRotationSpaceWorld;
does the trick! (Applying each per-frame increment about fixed world axes, rather than about the mesh's own already-rotated local axes, keeps combined horizontal and vertical swipes from drifting out of sync.)
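Independent of NinevehGL, the underlying idea can be sketched with quaternion math (TypeScript with Three.js types, purely as an illustration; orientation and applySwipe are illustrative names): accumulate the swipe increments by premultiplying, so each increment is applied about fixed world axes.

import * as THREE from 'three';

// Orientation accumulated across frames.
const orientation = new THREE.Quaternion();

// Apply this frame's swipe (vx, vy in radians) about the world X and Y axes.
// Premultiplying applies the new rotation in world space, so horizontal and
// vertical swipes stay consistent regardless of how the mesh is already oriented.
function applySwipe(vx: number, vy: number): void {
  const aboutWorldY = new THREE.Quaternion()
    .setFromAxisAngle(new THREE.Vector3(0, 1, 0), vx);
  const aboutWorldX = new THREE.Quaternion()
    .setFromAxisAngle(new THREE.Vector3(1, 0, 0), vy);
  orientation.premultiply(aboutWorldY).premultiply(aboutWorldX);
  // e.g. mesh.quaternion.copy(orientation) if this were a Three.js scene
}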
Resources:
http://www.youtube.com/watch?v=t5DzKP6y0nU&list=UUJ0dpXJSjM9rpDtx6ZdPXHw
http://nineveh.gl/docs/Classes/NGLObject3D.html
http://nineveh.gl/docs/Classes/NGLObject3D.html#//api/name/rotationSpace
I am currently coding a little AR-Game for myself on the iPhone 3GS ;-)
I want to use the accelerometer & compass data to rotate my camera in OpenGL. The camera has a fixed position and can only rotate based on the accelerometer. The iPhone is initially rotated 90° to get a bigger widescreen view ;-) So the axes are switched...
When I hold the iPhone straight in front of me I get these values:
accel.x = 1
accel.y = 0
accel.z = 0
When I move the iPhone straight up over my head I get these values:
accel.x = 0
accel.y = 0
accel.z = 1
So the values range between:
x: 1 straight ahead and 0 over my head
y: 0 straight ahead and 0 over my head
z: 0 straight ahead and 1 over my head
I want to use x, y, z for my camera world coordinates. E.g. accel.x = 0.5 and accel.z = 0.5
The camera should change the centerX, centerY and centerZ values based on the values I get from the accelerometer.
How can I manage this?
Thanks ;-)
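One way to think about the mapping, sketched below (illustrative names; the axis conventions are assumptions based on the readings described above): gravity only gives you pitch (how far the view is tilted up), so the horizontal facing direction has to come from the compass heading. Together they define a unit view direction, and the gluLookAt-style centerX/Y/Z is simply a point one unit ahead of the eye along that direction. Low-pass filtering the raw readings first helps a lot.

interface Vec3 { x: number; y: number; z: number; }

function lookAtCenter(eye: Vec3, accel: Vec3, headingRad: number): Vec3 {
  // Pitch from gravity: accel.x ~ 1 when level (landscape), accel.z ~ 1 when
  // pointing straight up, so the elevation angle is roughly:
  const pitch = Math.atan2(accel.z, accel.x);   // 0 = level, PI/2 = overhead

  // Unit view direction from compass heading (yaw) and accelerometer pitch,
  // assuming heading 0 means facing down the -Z axis.
  const dir: Vec3 = {
    x: Math.cos(pitch) * Math.sin(headingRad),
    y: Math.sin(pitch),
    z: -Math.cos(pitch) * Math.cos(headingRad),
  };

  // centerX/Y/Z for a gluLookAt-style call: a point one unit ahead of the eye.
  return { x: eye.x + dir.x, y: eye.y + dir.y, z: eye.z + dir.z };
}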
If you're getting X=1 with the phone straight ahead of you, it sounds like you're holding the phone in landscape, with the button to the right, is that correct?
The accelerometer reports gravitational force, so X=1 means that you're going DOWN the X-axis on the screen.
This means that your +X direction (pixels) on the screen is really the +Y direction (gl drawing) in your OpenGL world, which corresponds to a -X accelerometer reading.
In this orientation, moving the phone to your left and right will give +Y and -Y accelerometer readings, respectively. How strong the readings are depends on how much force you move the phone with. If you move slowly, the reading may be indistinguishable from normal noise in the data. If you move at about the speed the phone would fall if you dropped it, your reading will be close to "1" ("1 G-force" or "1 gravity"). If you move it faster than that, you will get higher readings.
NOTE: if you just turn and then stop moving, the accelerometer reading will revert to 0. The accelerometer measures acceleration ("the acceleration due to gravity"!), not facing direction. This is why the compass becomes important.
If you tilt the phone screen-down or screen-up, the accelerometer's Z reading will change, as gravity pulls "into" or "away from" the screen. With the screen perfectly vertical (as when holding the phone straight in front of your face, in any orientation), the accelerometer's Z reading should be 0.
How you translate that orientation data to your camera depends on your app. Just keep in mind that "steering" the camera by moving the phone will be tricky to get right, as you have to account for both acceleration AND deceleration of the user's movements!
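A common way to tame that noise and jerkiness is to run the raw readings through a simple low-pass filter before driving the camera. A minimal sketch (illustrative names; alpha is a tunable smoothing factor between 0 and 1):

interface Vec3 { x: number; y: number; z: number; }

let smoothed: Vec3 = { x: 1, y: 0, z: 0 };   // start out "straight ahead"

function filterAccel(raw: Vec3, alpha = 0.1): Vec3 {
  // Move a fraction of the way toward the new reading each update.
  smoothed = {
    x: smoothed.x + alpha * (raw.x - smoothed.x),
    y: smoothed.y + alpha * (raw.y - smoothed.y),
    z: smoothed.z + alpha * (raw.z - smoothed.z),
  };
  return smoothed;
}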
To help you visualize this, check out Xcode's sample project "Accelerometer." Just search for that word in the documentation, and you'll find a sample project you can build and play with to get an idea of what each axis means.
Luck!