Windows Phone 7 UIElement manipulation - windows-phone-7

I want to accomplish the following scenario: I have a UIElement with a CompositeTransform that I want to drag around the screen, and when I tap on it, I want it to rotate by 90 degrees. So I'm handling:
Tap
ManipulationStarted
ManipulationDelta -> where I increase Translate.X and Translate.Y by e.DeltaManipulation.Translation.X and .Y
ManipulationCompleted
When CompositeTransform.Rotation is 0, everything works fine. However, when it's > 0 (e.g. 90), e.DeltaManipulation.Translation gives me values relative to the rotation of the object! So when I try to move the UIElement to the right of the screen, it moves up…
Any hints?

I have the position values (Top and Left) bound to the Canvas and the rotation value bound to a RotateTransform's Angle. During my ManipulationDelta event, I use these two lines:
piece.Left = piece.Left + (Math.Cos(piece.Radians)*e.DeltaManipulation.Translation.X) - (Math.Sin(piece.Radians) * e.DeltaManipulation.Translation.Y);
piece.Top = piece.Top + (Math.Cos(piece.Radians) * e.DeltaManipulation.Translation.Y) + (Math.Sin(piece.Radians) * e.DeltaManipulation.Translation.X);
I should mention that I only rotate by 90 degrees at a time; I think this wouldn't work as-is for arbitrary angles, but the sine and cosine functions will get you there with a little editing.
Plus, you said you wanted a hint...
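The correction above is just a 2-D rotation of the delta vector by the element's angle. Here is a minimal sketch of that math in plain JavaScript (the `rotateDelta` helper is my own name, not a platform API):

```javascript
// Rotate a manipulation delta (reported in the element's rotated local
// frame) back into parent/canvas coordinates.
function rotateDelta(dx, dy, radians) {
  return {
    x: Math.cos(radians) * dx - Math.sin(radians) * dy,
    y: Math.sin(radians) * dx + Math.cos(radians) * dy,
  };
}

// With a 90-degree rotation, a local "drag right" (dx = 10) becomes
// a purely vertical move in canvas space.
const d = rotateDelta(10, 0, Math.PI / 2);
```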

Related

How to animate multiple sprites in one direction using one geometry in Three.js?

I'm trying to do something similar to this example, except instead of having the snow flakes flutter about in all directions I'm trying to animate these sprites in only one direction, like having the snow flakes fall to the ground.
The example above was able to load multiple sprites into one geometry since it can vary the rotations of the points object:
particles.rotation.x = Math.random() * 6;
particles.rotation.y = Math.random() * 6;
particles.rotation.z = Math.random() * 6;
However, this won't work if you're animating all the points in one direction. In this case, would I have to create a new geometry for each sprite, or is there a more efficient way to do this using just one geometry?
There are several options. Instead of rotating randomly, you could:
Decrease the y position on each frame with particles.position.y -= 0.01;. When it crosses a certain threshold (For example: y <= -100), move them back up to the origin (y = 100). You'll have to stagger a few Sprite objects so you don't notice the jump.
Rotate along the x-axis, so the spinning motion makes them go down when in front of the camera.
Since the snowflakes will spin up on the opposite side, you could use some fog to hide the far side, and give it a more wintry feel.
Animate via a custom shader, although this is much more complex if you don't know GLSL.
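The first option (move everything down, recycle at a threshold) can be sketched like this. The `systems` array stands in for your THREE.Points objects; only `position.y` is touched, and the constants are placeholder values:

```javascript
// Sketch of option 1: move each particle system down every frame and
// wrap it back to the top once it falls below a threshold.
const TOP = 100, BOTTOM = -100, SPEED = 0.01;

function fall(systems) {
  for (const s of systems) {
    s.position.y -= SPEED;
    if (s.position.y <= BOTTOM) {
      s.position.y = TOP; // recycle at the top; stagger starts to hide the jump
    }
  }
}

const systems = [{ position: { y: 100 } }, { position: { y: -99.995 } }];
fall(systems); // call once per animation frame
```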

constant amount of memory for multitouch zoom and pan?

Part of the application I'm writing requires that I handle multi-touch interaction to handle things like panning and zooming. After messing around with ways to implement it, it got me thinking... is there a way to implement panning and zooming with a constant amount of memory, regardless of how many touches there are on the screen?
How will zoom and pan function?
Input is in the form of one changed touch. Every time a touch is moved, the zoom and pan must be corrected. The information provided with this event is the newPosition and the oldPosition of the touch that moved.
...additionally, a touch can be added or removed. In that event, there is just a position.
For panning, the screen will pan just as much as the average change in position of all the touches. So, if one touch moves by (15,5), and there are 5 touches (4 of which stayed put), then the screen will pan by (3,1).
For zooming, the screen will zoom proportional to the change in average distance of each touch from the center (the average position of all the touches).
What have I tried?
So far, I've decided that I need to keep track of the numberOfTouches, the center, or average position of all the touches, and the averageDistance from the center.
Panning is easy. Whenever a touch moves by changeInPosition, pan by changeInPosition/numberOfTouches. Easy.
I haven't figured out zooming. I think I need to keep track of the center, so I know what point to zoom in on. I also think I need to keep track of the averageDistance, so I know how much to zoom. But... I can't figure out how to maintain averageDistance. It's easy to keep track of if the center doesn't move, but how do I find the new averageDistance if the center does move?
So, the question is, given that one input, that one changed point, that one new touch, or that one removed touch... is there a way, without storing that touch in an array (AKA: with constant memory), to implement zoom and pan?
Am I on the right track? Can it be done with constant memory? Or must all the touches be tracked?
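The pan half really does work with constant state, as described above. A minimal sketch in plain JavaScript (`panState` and `onTouchMoved` are my own names):

```javascript
// Constant-memory pan: track only the touch count and the current offset.
const panState = { numberOfTouches: 5, pan: { x: 0, y: 0 } };

// One touch moved; shift the view by the average change in position.
function onTouchMoved(state, oldPos, newPos) {
  state.pan.x += (newPos.x - oldPos.x) / state.numberOfTouches;
  state.pan.y += (newPos.y - oldPos.y) / state.numberOfTouches;
}

// One of five touches moves by (15,5): the screen pans by (3,1).
onTouchMoved(panState, { x: 0, y: 0 }, { x: 15, y: 5 });
```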
Just to give you a counterexample.
Let's assume such a method exists with constant memory, and after a series of 100 touch events your tracked state is: center at (15,15) and average distance from the center of 5.
Now a touch changes such that the new center is (10,10). If you had the whole history of touches, you could recompute the new average distance from the new center exactly, but because you don't have the touch history you have no way to update it; it could be anything. Maybe the previous 100 touches were equally distributed across your space, or all in one half, or all at the same location.
So I would say there is no constant-memory approach. What you can do is maintain a history window of the last N touches and recalculate the average distance over them each time. You can keep the accumulated zoom as of the oldest touch in the window (N) and apply the zoom from the remaining touches on top of it. On each new touch, roll the accumulated zoom forward to the next touch (N-1), then discard the oldest touch (N) before inserting the new touch (1).
Not sure if Pythagoras would agree with me, but maybe this will spark a decent answer:
center.x = 0
center.y = 0
for each touch position 'p':
center.x = center.x + p.x
center.y = center.y + p.y
center.x = center.x / numberOfTouches
center.y = center.y / numberOfTouches
averageDistance = 0
for each touch position 'p':
averageDistance = averageDistance + sqrt((p.x - center.x)^2 + (p.y - center.y)^2)
averageDistance = averageDistance / numberOfTouches
BTW, if you allow center to move while zooming this will cause a pan-and-zoom.
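For reference, here is the two-pass computation over a stored touch list in plain JavaScript; this is the state the window-of-N approach would recompute on each change (the function name is my own):

```javascript
// Two passes: the center is the mean position, and the average distance
// is the mean distance of each touch from that center.
function centerAndAverageDistance(touches) {
  const n = touches.length;
  const center = { x: 0, y: 0 };
  for (const p of touches) { center.x += p.x; center.y += p.y; }
  center.x /= n;
  center.y /= n;
  let averageDistance = 0;
  for (const p of touches) {
    averageDistance += Math.hypot(p.x - center.x, p.y - center.y);
  }
  averageDistance /= n;
  return { center, averageDistance };
}

// Four touches on the corners of a square centered at (5,5):
const r = centerAndAverageDistance([
  { x: 0, y: 0 }, { x: 10, y: 0 }, { x: 0, y: 10 }, { x: 10, y: 10 },
]);
```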

Windows Phone XNA: Real Time Strategy Camera

We are developing a real-time strategy variant for WP7. At the moment, we require some direction/instruction on how to build an effective camera system. In other words, we would like a camera that can pan around a flat surface (a 2D or 3D level map). We have been experimenting with a 2D tile map, while our units/characters are all 3D models.
At first glance, it appears we need to figure out how to calculate a bounding box around the camera and its entire view perspective, or restrict the movement of the camera, based on what it can see, to the bounds of the 2D map.
Any help would be greatly appreciated!!
Cheers
If you're doing true 2D scrolling, it's pretty simple:
Scroll.X must be between 0 and level.width - screen.width
Scroll.Y must be between 0 and level.height - screen.height
(use MathHelper.Clamp to help you with this)
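The 2D case is just a per-axis clamp. A quick sketch in plain JavaScript (the `clamp` helper plays the role of MathHelper.Clamp; all names here are my own):

```javascript
// Clamp a scroll offset so the view never leaves the level.
const clamp = (v, min, max) => Math.min(Math.max(v, min), max);

function clampScroll(scroll, level, screen) {
  return {
    x: clamp(scroll.x, 0, level.width - screen.width),
    y: clamp(scroll.y, 0, level.height - screen.height),
  };
}

// Scrolling past the left edge and below the bottom edge gets capped.
const s = clampScroll({ x: -20, y: 900 },
                      { width: 1000, height: 800 },
                      { width: 480, height: 320 });
```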
As for 3D it's a little trickier but almost the same principle.
All you really need is to define TWO Vector3 points, one is the lower left back corner and the other the upper right front (or you could do upper left front / lower right back, etc., up to you). These would be your bounding values.
The first one you could define as a readonly with constant values that you tweak until the camera bounds are exactly as you wish for that corner. There IS a way of computing this, but honestly I prefer to have more control, so I typically choose the route of value tweaking instead.
The second one you could start off with a "base" that you could manually tweak (or compute) just like before but this time you have to add the map width and length (to X and Z) so you know the true bounds depending on the map you have loaded.
Once you have these values, clamp them just as before:
//pans the camera but caps at the bounds
public void ScrollByCheckBounds(Vector3 scroll, Vector3 bottomLeftFront, Vector3 topRightBack)
{
    Vector3 newScroll = Scroll + scroll;

    // Clamp each dimension. MathHelper.Clamp expects (value, min, max),
    // so bottomLeftFront is assumed to hold the minimum on every axis.
    newScroll.X = MathHelper.Clamp(newScroll.X, bottomLeftFront.X, topRightBack.X);
    newScroll.Y = MathHelper.Clamp(newScroll.Y, bottomLeftFront.Y, topRightBack.Y);
    newScroll.Z = MathHelper.Clamp(newScroll.Z, bottomLeftFront.Z, topRightBack.Z);

    Scroll = newScroll;
}

How to make buttons (Corona SDK) from graphics with arbitrary shape?

I have a profile of a mountain in my game and need Corona to be able to discern between the user pressing (touch event) on the mountain, and pressing on the valley in between the peaks (alpha channel used to create shape). It seems that Corona treats a display object in this sense as a rectangle, thus my need cannot be satisfied by any means I have found.
However, Corona physics functionality allows you to create complex polygons to mimic arbitrary shapes for collision handling, but I have found no similar method for buttons.
Any ideas?
It's not automatic, but here's a solution you can try that involves a little setup and code. Shouldn't be too difficult.
Test the location of the touch in your event listener by inspecting the event.x and event.y parameters. You could make this efficient by creating a table that holds the left-most and right-most x values for each strip of, say, 10 pixels from the top to the bottom of your object. For example, consider this mountain:
Use the y coordinate of the bottom of each light blue rectangle as the index into the table, and load the left x and right x values into that entry, e.g.:
hitTable[120] = {245,260}
hitTable[130] = {230,275}
and so on...
Then, in the touch event listener, force the event.y parameter to one of your table indices, either with a function or just testing to see which it's closest to. Then, using that table entry, see if event.x is between the x coordinates you've specified for that y coordinate. If not, just ignore the touch.
You can even build the table and assign it as a property of the image itself like this:
hitTable = {}
hitTable[120] = {245,260}
hitTable[130] = {230,275}
... and so on, then ...
myMountain.hitTable = hitTable
Once you've done that, you can access the table in the touch event listener as event.target.hitTable.
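The table lookup itself is language-agnostic; here is a sketch in JavaScript using the sample entries from above (`isHit` is my own name, and snapping y to the strip index is done with a simple rounding rule rather than a Corona API):

```javascript
// Hit table: for each 10-pixel strip (indexed by the strip's bottom y),
// store the left-most and right-most x of the mountain in that strip.
const hitTable = {
  120: [245, 260],
  130: [230, 275],
};

// Snap y up to its strip index, then check x against that strip's span.
function isHit(x, y) {
  const key = Math.ceil(y / 10) * 10;   // e.g. y = 115 -> strip 120
  const span = hitTable[key];
  if (!span) return false;              // no strip recorded: valley or sky
  return x >= span[0] && x <= span[1];
}
```

In the real listener you would call this with `event.x` and `event.y`, reading the table from `event.target.hitTable`, and simply return early when it reports a miss.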
Could you create the mountain peaks with a 90-degree tip? Then if you split the mountain peaks up and rotated them 45 degrees, they would fit into a square shape. Once you've exported each of them, you import them into Corona and then rotate them back 45 degrees. I haven't tested this, but I'm imagining it might work :)

Java3D: rotating object in world coordinates?

I've been searching all evening but can't find the information I'm looking for, or even if it's possible, which is quite distressing ;)
I'm using Java3D and can't figure out how to rotate the camera in world space.
My left/right, and up/down rotation both happen on local space.
Meaning that if I move left and right, everything looks fine.
However if I look 90 degrees down, then look 90 degrees right, everything appears to be on its side.
Currently, I'm doing the following. This will result in the above effects:
TransformGroup cam = universe.getViewingPlatform().getViewPlatformTransform();
Transform3D trfcam = new Transform3D();
cam.getTransform(trfcam);
trfcam.mul(Camera.GetT3D()); //Gets a Transform3D containing how far to rotate left/right and how far to move left/right/forward/back
trfcam.mul(Camera.GetRot()); //Gets a t3d containing how far to rotate up/down
cam.setTransform(trfcam);
Alternatively, one thing I tried was rotating the root, but that rotates around 0, so if I ever move the camera away from 0, it goes bad.
Is there something available on the web that would talk me through how to achieve this kind of thing?
I've tried a lot of different things but just can't seem to get my head around it at all.
I'm familiar with the concept, as I've achieved it in Ogre3D, just not familiar with the law of the land in J3D.
Thanks in advance for replies :)
Store the amount you have rotated around each axis (x and y), and when you try to rotate around the x axis, for example, reverse the rotation around y, do the rotation around x, then redo the rotation around y.
I'm not sure I understand your second question correctly. Since viewer and model transformations are dual, you can simulate camera moves by transforming the world itself. If you don't want to translate the x and y axes you are rotating around, just add another TransformGroup to the main TransformGroup you are using, and do the transforms in the new one.
Edit: The first solution is quite slow, so you can build a single Transform3D out of the three transforms you have to do.
Say you have already rotated around the x axis (Transform3D xrot), and now you need to rotate around y:
Transform3D yrot = new Transform3D();
yrot.rotY(angle);
Transform3D temp = new Transform3D(xrot); // copy the previous transform (a plain assignment would only copy the reference)
xrot.mul(yrot); // don't forget the order: xrot = xrot * yrot
temp.transpose(); // the inverse of the old transform (transpose equals inverse for pure rotations)
xrot.mul(temp); // xrot = oldXrot * yrot * oldXrot^-1
yrot = xrot; // store it for future rotations around the x axis
cam.setTransform(yrot);
It works similarly for many transformations: reverse the previous one, do the new transform, then redo the old one. I hope it helps.
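The reverse-apply-redo pattern is just matrix conjugation. A small numeric check with plain 3x3 rotation matrices (JavaScript, my own helpers, not the Java3D API): conjugating a y rotation by a 90-degree x rotation should yield a rotation about z, because the x rotation maps the y axis onto z.

```javascript
// 3x3 matrix multiply and transpose (transpose == inverse for rotations).
const mul = (a, b) => a.map((row, i) =>
  b[0].map((_, j) => row.reduce((s, _, k) => s + a[i][k] * b[k][j], 0)));
const transpose = (m) => m[0].map((_, j) => m.map((row) => row[j]));

const rotX = (t) => [[1, 0, 0],
                     [0, Math.cos(t), -Math.sin(t)],
                     [0, Math.sin(t), Math.cos(t)]];
const rotY = (t) => [[Math.cos(t), 0, Math.sin(t)],
                     [0, 1, 0],
                     [-Math.sin(t), 0, Math.cos(t)]];

// "Reverse the old rotation, apply the new one, redo the old one":
// result = oldX * newY * oldX^-1
const oldX = rotX(Math.PI / 2);
const result = mul(mul(oldX, rotY(Math.PI / 4)), transpose(oldX));
```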
