Part of the application I'm writing requires handling multi-touch interaction for things like panning and zooming. Messing around with ways to implement it got me thinking... is there a way to implement panning and zooming with a constant amount of memory, regardless of how many touches there are on the screen?
How will zoom and pan function?
Input is in the form of one changed touch. Every time a touch is moved, the zoom and pan must be corrected. The information provided with this event is the newPosition and the oldPosition of the touch that moved.
...additionally, a touch can be added or removed. In that event, there is just a position.
For panning, the screen will pan just as much as the average change in position of all the touches. So, if one touch moves by (15,5), and there are 5 touches (4 of which stayed put), then the screen will pan by (3,1).
For zooming, the screen will zoom proportional to the change in average distance of each touch from the center (the average position of all the touches).
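Concretely, the intended rule could be written as a one-liner (a sketch in TypeScript-style pseudocode; the names are mine, and maintaining these averages turns out to be the hard part below):

// Intended zoom rule: scale the view about `center` by this factor
const zoomFactor = newAverageDistance / oldAverageDistance;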
What have I tried?
So far, I've decided that I need to keep track of the numberOfTouches, the center (that is, the average position of all the touches), and the averageDistance from the center.
Panning is easy: whenever a touch moves by changeInPosition, pan by changeInPosition/numberOfTouches.
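For reference, a minimal sketch of that constant-memory bookkeeping in TypeScript-style pseudocode (Point, touchCount, and center are just my names for the aggregates described here; no per-touch array is kept):

interface Point { x: number; y: number; }

let touchCount = 0;
let center: Point = { x: 0, y: 0 };

// A touch moved: the center (average position) shifts by delta/touchCount,
// and the pan amount is exactly that shift.
function onTouchMoved(oldPosition: Point, newPosition: Point): Point {
  const shift = {
    x: (newPosition.x - oldPosition.x) / touchCount,
    y: (newPosition.y - oldPosition.y) / touchCount,
  };
  center.x += shift.x;
  center.y += shift.y;
  return shift; // pan by this amount
}

// A touch was added: a running average also updates in O(1).
// (Removal is symmetric: multiply by touchCount, subtract the position,
// divide by touchCount - 1.)
function onTouchAdded(position: Point): void {
  center.x = (center.x * touchCount + position.x) / (touchCount + 1);
  center.y = (center.y * touchCount + position.y) / (touchCount + 1);
  touchCount += 1;
}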
I haven't figured out zooming. I think I need to keep track of the center, so I know what point to zoom in on. I also think I need to keep track of the averageDistance, so I know how much to zoom. But... I can't figure out how to maintain averageDistance. It's easy to keep track of if the center doesn't move, but how do I find the new averageDistance if the center does move?
So, the question is, given that one input, that one changed point, that one new touch, or that one removed touch... is there a way, without storing that touch in an array (AKA: with constant memory), to implement zoom and pan?
Am I on the right track? Can it be done with constant memory? Or must all the touches be tracked?
Just to give you a counterexample:

Let's assume such a method exists with constant memory, and suppose your stored aggregates say there are 100 touches, with center (15,15) and an average distance of 5 from that center.

You now execute a pan such that the new center is (10,10). If you had the whole history of touches, you could compute the new average distance from the new center exactly. But because you don't have the touch history, you have no way to update the average distance; it could be almost anything. Maybe the 100 touches were evenly distributed over your space, or all in one half of it, or all clustered at a single location, and each of those distributions gives a different new average distance.
So I would say there is no constant-memory approach. What you can do is maintain a history window of the last N touches and recalculate the average distance over that window each time. Keep the zoom amount as of the oldest touch in the window and apply the zoom contributions of the touches after it; on each new touch, advance that baseline zoom to the next-oldest entry, discard the oldest touch, and insert the new one (see the sketch below).
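A minimal sketch of that window fallback in TypeScript-style pseudocode (the names are mine; memory is O(N) rather than O(1), but bounded):

const N = 32; // window size: an assumption, tune as needed

interface Point { x: number; y: number; }

const touchWindow: Point[] = [];

function pushTouch(p: Point): void {
  if (touchWindow.length === N) touchWindow.shift(); // discard the oldest touch
  touchWindow.push(p);
}

function centerOf(points: Point[]): Point {
  const c = { x: 0, y: 0 };
  for (const p of points) { c.x += p.x; c.y += p.y; }
  return { x: c.x / points.length, y: c.y / points.length };
}

function averageDistance(points: Point[]): number {
  const c = centerOf(points);
  let sum = 0;
  for (const p of points) sum += Math.hypot(p.x - c.x, p.y - c.y);
  return sum / points.length;
}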
Not sure if Pythagoras would agree with me, but maybe this will spark a decent answer:
center.x = 0
center.y = 0
for each touch position 'p':
    center.x = center.x + p.x
    center.y = center.y + p.y
averageDistance = sqrt((center.x * center.x) + (center.y * center.y))
averageDistance = averageDistance / (2 * (numberOfTouches - 1))
center.x = center.x / numberOfTouches
center.y = center.y / numberOfTouches
BTW, if you allow center to move while zooming this will cause a pan-and-zoom.
Related
I'm trying to do something similar to this example, except instead of having the snowflakes flutter about in all directions I'm trying to animate these sprites in only one direction, like having the snowflakes fall to the ground.
The example above was able to load multiple sprites into one geometry since it can vary the rotations of the points object:
particles.rotation.x = Math.random() * 6;
particles.rotation.y = Math.random() * 6;
particles.rotation.z = Math.random() * 6;
However, this won't work if you're animating all the points in one direction. In this case, would I have to create a new geometry for each sprite, or is there a more efficient way to do this using just one geometry?
There are several options. Instead of rotating randomly, you could:
Decrease the y position on each frame with particles.position.y -= 0.01;. When it crosses a certain threshold (for example, y <= -100), move it back up to the top (y = 100). You'll have to stagger a few Sprite objects so you don't notice the jump (see the sketch after this list).
Rotate along the x-axis, so the spinning motion makes them go down when in front of the camera.
Since the snowflakes will spin up on the opposite side, you could use some fog to hide the far side, and give it a more wintry feel.
Animating via custom shaders, although this is much more complex if you don't know GLSL shader code.
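For the first option, a minimal sketch in TypeScript-flavoured three.js (particleSystems, FALL_SPEED, and the ±100 bounds are my own assumptions):

import * as THREE from 'three';

const FALL_SPEED = 0.01;
const TOP_Y = 100;
const BOTTOM_Y = -100;

// Call once per frame: drift each staggered Points system downward and
// wrap it back above the view once it falls below the threshold.
function animateSnow(particleSystems: THREE.Points[]): void {
  for (const particles of particleSystems) {
    particles.position.y -= FALL_SPEED;
    if (particles.position.y <= BOTTOM_Y) {
      particles.position.y = TOP_Y;
    }
  }
}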
Currently, I'm taking each corner of my object's bounding box and converting it to Normalized Device Coordinates (NDC), keeping track of the maximum and minimum NDC values. I then calculate the middle of the NDC range, find that point in the world, and have my camera look at it.
<Determine max and minimum NDCs>
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);
direction = point.sub(camera.position).normalize();
point = camera.position.clone().add(direction.multiplyScalar(distance));
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal in magnitude when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers (ignoring the sign), and as I step closer and closer the error between the numbers grows bigger and bigger. E.g., the errors for the first few centers are: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point, though, I look too far right, and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is -1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.

You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field, and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to look at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move that distance in that direction: 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane through that point that is parallel to your camera's image plane (i.e. whose normal is the direction the camera is facing).

Next, raycast from your camera to that plane along the direction the camera is facing. The difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This works irrespective of the direction or position of the camera and object.
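A minimal sketch of those two steps using a recent three.js (recenterCamera and objectCenter are my own names, not the asker's code):

import * as THREE from 'three';

function recenterCamera(camera: THREE.PerspectiveCamera, objectCenter: THREE.Vector3): void {
  // Direction the camera is facing.
  const viewDir = new THREE.Vector3();
  camera.getWorldDirection(viewDir);

  // Plane through the object's centre, parallel to the image plane
  // (its normal is the view direction).
  const plane = new THREE.Plane().setFromNormalAndCoplanarPoint(viewDir, objectCenter);

  // Raycast from the camera along the view direction to the plane.
  const ray = new THREE.Ray(camera.position.clone(), viewDir);
  const hit = new THREE.Vector3();
  if (ray.intersectPlane(plane, hit)) {
    // The offset between the centre and the hit point is exactly how far
    // the camera must move.
    camera.position.add(objectCenter.clone().sub(hit));
  }
}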
You are up against a chicken-and-egg problem: every time you change the camera attributes, you change where your object is projected in NDC space. So even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere, make it as canonical as possible (i.e. give it a 1:1 aspect ratio), and place your object along the camera's z-axis. Is this not possible?
Imagine a camera aimed at a sphere that is free to rotate around its centre.
Imagine that the user can rotate this sphere by reaching out from the camera to touch the closest point on this sphere and flick it round.
This is what I'm trying to implement in iOS.
OpenGl ES on Iphone - Displaying and rotating 3D objects <-- in this linked question I'm trying to figure out a suitable framework.
I've since opted for http://nineveh.gl/ (it is still in beta, but works fine with a little nudging) and got its rotation demo working.
Every time the user's finger moves on the touchscreen, the code catches that instantaneous vector, which gets added into the overall velocity vector on every update (at 60 Hz):
- (void) drawView
{
    static CGPoint v = { 0.0, 0.0 };

    v.x += _vel.x;
    v.y += _vel.y;

    // resetting to 0 makes sure each touch movement only gets fed in once
    _vel.x = _vel.y = 0.0;

    float decay = 0.93;
    v.x *= decay;
    v.y *= decay;

    // this is wrong...
    [_mesh rotateRelativeToX: v.y
                         toY: v.x
                         toZ: 0.0];

    [_camera drawCamera];
}
This resultant vector should then be applied to the mesh.

What I have above initially seems to work: if I swipe horizontally it works perfectly, and similarly vertically. But when I start to combine the rotations it goes out of sync. Say I rotate 10° horizontally and then 10° vertically; now it is not responding properly.

Can anyone elucidate the problem?

From reading up, I suspect a solution will involve quaternions...

Can anyone suggest how I can solve this? This is the API reference I have to play with: http://nineveh.gl/docs/Classes/NGLObject3D.html
I notice that there is a rotationSpace property that may come in handy, and also a rotateRelativeWithQuaternion method.
I'm hoping that someone out there is familiar with this problem and can see how to wield this API to slay it.
_cube1.rotationSpace = NGLRotationSpaceWorld;
does the trick! Rotating relative to world space applies each increment around the fixed world axes rather than around the mesh's own (already rotated) local axes, which is why the combined horizontal and vertical swipes no longer drift out of sync.
Resources:
http://www.youtube.com/watch?v=t5DzKP6y0nU&list=UUJ0dpXJSjM9rpDtx6ZdPXHw
http://nineveh.gl/docs/Classes/NGLObject3D.html
http://nineveh.gl/docs/Classes/NGLObject3D.html#//api/name/rotationSpace
We are developing a real-time strategy variant for WP7. At the moment, we need some direction/instruction on how to build an effective camera system. In other words, we would like a camera that can pan around a flat surface (a 2D or 3D level map). We have been experimenting with a 2D tile map, while our units/characters are all 3D models.

At first glance, it appears we need to figure out how to calculate a bounding box around the camera and its entire view frustum, or else restrict the movement of the camera, and thus what it can see, to the bounds of the 2D map.
Any help would be greatly appreciated!!
Cheers
If you're doing true 2D scrolling, it's pretty simple:
Scroll.X must be between 0 and level.width - screen.width
Scroll.Y must be between 0 and level.height - screen.height
(use MathHelper.Clamp to help you with this)
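A minimal sketch of that 2D clamp in TypeScript-style pseudocode (clampScroll is a hypothetical helper; in XNA you would call MathHelper.Clamp directly):

function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

interface Size { width: number; height: number; }

// Keep the scroll offset inside the level so the view never shows
// anything past the map edges.
function clampScroll(scroll: { x: number; y: number }, level: Size, screen: Size) {
  scroll.x = clamp(scroll.x, 0, level.width - screen.width);
  scroll.y = clamp(scroll.y, 0, level.height - screen.height);
  return scroll;
}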
As for 3D it's a little trickier but almost the same principle.
All you really need is to define TWO Vector3 points: one is the lower left front corner and the other the upper right back (or upper left front / lower right back, etc.; up to you). These are your bounding values.

The first one you could define as a readonly field with constant values, tweaked so that corner of the camera bounds is exactly where you want it. There IS a way of computing this, but honestly I prefer to have more control, so I typically go the value-tweaking route instead.

The second one you could start off with a "base" that you manually tweak (or compute) just like before, but this time add the map width and length (to X and Z) so you get the true bounds for whichever map you have loaded.
Once you have these values, clamp them just as before:
//pans the camera but caps at bounds
//assumes bottomLeftFront holds the componentwise minimum of the two
//corners and topRightBack the componentwise maximum
public void ScrollByCheckBounds(Vector3 scroll, Vector3 bottomLeftFront, Vector3 topRightBack)
{
    Vector3 newScroll = Scroll + scroll;

    //clamp each dimension (MathHelper.Clamp takes value, min, max)
    newScroll.X = MathHelper.Clamp(newScroll.X, bottomLeftFront.X, topRightBack.X);
    newScroll.Y = MathHelper.Clamp(newScroll.Y, bottomLeftFront.Y, topRightBack.Y);
    newScroll.Z = MathHelper.Clamp(newScroll.Z, bottomLeftFront.Z, topRightBack.Z);

    Scroll = newScroll;
}
I want to accomplish the following scenario. I have a UIElement with a CompositeTransform which I want to drag around the screen. Plus, when I’m tapping on it, I want it to rotate by 90 degrees. So,
I’m handling
Tap
ManipulationStarted
ManipulationDelta -> there I’m increasing Translate.X and Y by e.DeltaManipulation.Translation.X and Y
ManipulationCompleted
When the CompositeTransform.Rotation is 0, everything works fine. However, when it's > 0 (e.g. 90), e.DeltaManipulation.Translation gives me values relative to the rotation of the object! So, I try to move the UIElement to the right of the screen but it moves up…
Any hints?
I have the position values (Top & Left) bound to the Canvas and the rotation value bound to a RotateTransform angle. During my ManipulationDelta event, I use these two lines:
piece.Left = piece.Left + (Math.Cos(piece.Radians)*e.DeltaManipulation.Translation.X) - (Math.Sin(piece.Radians) * e.DeltaManipulation.Translation.Y);
piece.Top = piece.Top + (Math.Cos(piece.Radians) * e.DeltaManipulation.Translation.Y) + (Math.Sin(piece.Radians) * e.DeltaManipulation.Translation.X);
I should mention that I only rotate by 90 degrees at a time, so I haven't verified this for arbitrary angles, but the sine and cosine functions will get you there with a little editing.
Plus, you said you wanted a hint...
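For arbitrary angles, a minimal sketch of that same trigonometry in TypeScript-style pseudocode (localDeltaToCanvas is a hypothetical helper; dx/dy stand in for e.DeltaManipulation.Translation.X/Y):

// Rotate a manipulation delta from the element's rotated local space
// back into canvas space, for any rotation angle in degrees.
function localDeltaToCanvas(dx: number, dy: number, angleDegrees: number) {
  const theta = (angleDegrees * Math.PI) / 180;
  return {
    dx: Math.cos(theta) * dx - Math.sin(theta) * dy,
    dy: Math.sin(theta) * dx + Math.cos(theta) * dy,
  };
}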