Shooter Rotation Calculations

I am making a simple JavaScript game using Phaser and it is going pretty well. I am currently having a few issues with the rotation of the tank, however.
Issue 1:
When shooting a bullet (left click), its speed depends on the distance from the tank to the mouse. The code I currently use to get the tank's rotation subtracts the mouse x and y from the tank x and y, and the bullet receives its velocity from these values. How would I go about keeping the velocity consistent no matter how far away the mouse is?
Phaser has a native method for this that I did not know about.
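For reference, a minimal sketch of both fixes (assuming Phaser 2's Arcade Physics; the fireBullet wrapper and the bullets group are placeholder names):

    var BULLET_SPEED = 400; // pixels per second; tune to taste

    function fireBullet(game, tank, bullets) { // assumes physics is enabled on the bullets group
        var bullet = bullets.getFirstDead();
        if (!bullet) { return; }
        bullet.reset(tank.x, tank.y);
        // Option 1: Phaser's helper; the speed is constant regardless of distance
        game.physics.arcade.moveToPointer(bullet, BULLET_SPEED);
        // Option 2, by hand: normalise the direction vector, then scale it
        // var dx = game.input.activePointer.worldX - tank.x;
        // var dy = game.input.activePointer.worldY - tank.y;
        // var len = Math.sqrt(dx * dx + dy * dy) || 1;
        // bullet.body.velocity.setTo(dx / len * BULLET_SPEED, dy / len * BULLET_SPEED);
    }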
Issue 2:
If you click to shoot a bullet and turn quickly, you can see the bullet under the barrel of the tank. This is more of a visual "glitch", but I would like to have the bullet spawn right at the edge of the barrel of the tank. Is there a method to get the correct spawn location based on both the rotation and the barrel position?
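Edit: offsetting the spawn point from the tank's centre along its rotation seems to be the standard trick. A rough sketch (BARREL_LENGTH is a made-up constant you would measure from your sprite):

    var BARREL_LENGTH = 40; // hypothetical: distance from tank centre to barrel tip

    function barrelTip(tank) {
        // Assumes tank.rotation already points at the mouse; if your sprite art
        // points up rather than right, subtract Math.PI / 2 first.
        return {
            x: tank.x + Math.cos(tank.rotation) * BARREL_LENGTH,
            y: tank.y + Math.sin(tank.rotation) * BARREL_LENGTH
        };
    }

    // usage: var tip = barrelTip(tank); bullet.reset(tip.x, tip.y);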

Related

How can I handle a camera direction parallel to the y-axis in my raytracer?

I'm working on my raytracer and I can't manage to handle the case where the direction vector of my camera is parallel to the vector (0, 1, 0).
I think it is linked to the way I compute the camera's up and right vectors, but I can't find a workaround.
Here is how I do it:
cam_up = vector_cross(cam_dir, {0, 1, 0});
cam_right = vector_cross(cam_up, cam_dir);
Can somebody enlighten me?
You have the correct formula for calculating an orthogonal axis from a single cameraOut vector. However, as has been stated, this formula does not account for the camera roll, which could be any direction in the plane perpendicular to the camera direction. This becomes apparent when moving a camera across the pole (y-axis): the behavior will be undesirable (yes, it will be correctly aimed, but no doubt the roll won't be what you want).
For more information, look into gimbal lock.
The roll itself is not really incorrect. However, for the camera transition to be smooth and appear correct (rather than suddenly flipping or spinning as its direction approaches (0, 1, 0)), you need to correct any roll incurred. This is a rotation about the cameraOut axis, and ideally it should be relative to the previous cameraAlong. In other words, to maintain the correct (or perceived correct) roll, you need to consider the camera pose (position and orientation) from the previous frame and ensure the roll is mitigated. Of course, if the camera doesn't move (i.e. you're rendering a frame with a static camera), you do not have a previous camera state, so the roll cannot be calculated and must instead be explicitly defined as part of the scene definition.
Personally, I store an entire orthogonal axis for the camera so the orientation and roll are always clearly defined. That is only for completeness; to be honest you don't need to store the entire axis, two vectors, cameraOut and cameraAlong (the third being cameraUp), are enough. cameraAlong depends on the handedness of your coordinate system (e.g. for an initial camera at position (0, 0, 0) in a left-handed coordinate system, cameraAlong points to the viewer's right; in a right-handed system it points the other way; cameraUp and cameraOut are the same in both).
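To make the pole case concrete, here is a small sketch of the common fallback-hint fix, in JavaScript for brevity (cross, dot, and normalize are defined inline; nothing here is specific to the asker's raytracer). As discussed above, swapping the hint still causes a roll pop at the pole unless you also carry roll over from the previous frame:

    function cross(a, b) {
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]];
    }
    function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
    function normalize(v) {
        var len = Math.sqrt(dot(v, v));
        return [v[0] / len, v[1] / len, v[2] / len];
    }

    function cameraBasis(camDir) { // camDir must be unit length
        // Fall back to a different world-up hint when camDir is (almost) parallel to it
        var hint = Math.abs(dot(camDir, [0, 1, 0])) > 0.999 ? [0, 0, 1] : [0, 1, 0];
        var camRight = normalize(cross(camDir, hint));
        var camUp = cross(camRight, camDir); // already unit length, no normalize needed
        return { right: camRight, up: camUp };
    }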
Hope this helps.
P.S. This isn't ray-tracing specific; the same principles apply to OpenGL/DirectX or any 3D representation.

Programmatic correction of camera tilt in a positioning system

A quick introduction:
We're developing a positioning system that works the following way. Our camera is situated on a robot and points upwards (looking at the ceiling). On the ceiling we have something like landmarks, from which we can compute the position of the robot.
Our problem:
The camera is tilted a bit (0-4 degrees, I think), because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and wasn't moved at all.
Our current (hardcoded) solution:
We've taken some test photos with the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted ca. 4 degrees in the "up direction" of the picture. Using some simple geometric transformations we've managed to reduce the tilt effect and find the real camera position. In the pictures we produced, the grey dot marks the center of the picture and the black dot is the real place on the ceiling under which the camera is situated; the black dot was transformed from the grey dot (its position was computed by correcting the grey dot's position). As you can easily notice, the grey dots form a circle on the ceiling and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of the tilt would have to be completely recalibrated. Therefore we wanted to leave the calibration phase to the user: they would take some pictures, assess the tilt parameters themselves, and then set them in the program. My question to you is: can you think of any better (more automatic) solution for computing the tilt parameters or correcting the tilt in the pictures?
Nice work. To have an automatic calibration is a nice challenge.
An idea would be to use the parallel lines from the roof tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then all lines will be secant (they intersect in the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, distortion needs to be corrected first so that lines are indeed straight.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn on itself and identify programmatically which point remains at the same place in the picture.
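To sketch what that automation could look like: record where one landmark appears in the image while the robot spins in place, then fit a circle to those points. The fitted center is the true point under the camera, and the radius and phase of the points encode the tilt. A simple algebraic (Kåsa) least-squares circle fit, in JavaScript purely for illustration:

    function fitCircle(points) { // points: [{x, y}, ...], at least 3 of them
        // Fit x^2 + y^2 = a*x + b*y + c by least squares; then
        // centre = (a/2, b/2), radius = sqrt(c + (a^2 + b^2) / 4)
        var sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0,
            sxz = 0, syz = 0, sz = 0, n = points.length;
        points.forEach(function (p) {
            var z = p.x * p.x + p.y * p.y;
            sx += p.x; sy += p.y; sxx += p.x * p.x; syy += p.y * p.y;
            sxy += p.x * p.y; sxz += p.x * z; syz += p.y * z; sz += z;
        });
        // Solve the 3x3 normal equations A * [a, b, c]^T = rhs with Cramer's rule
        var A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]];
        var rhs = [sxz, syz, sz];
        function det(m) {
            return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                 - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                 + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
        }
        function replaceCol(m, col, v) {
            return m.map(function (row, i) {
                return row.map(function (x, j) { return j === col ? v[i] : x; });
            });
        }
        var d = det(A);
        var a = det(replaceCol(A, 0, rhs)) / d;
        var b = det(replaceCol(A, 1, rhs)) / d;
        var c = det(replaceCol(A, 2, rhs)) / d;
        return { cx: a / 2, cy: b / 2, r: Math.sqrt(c + (a * a + b * b) / 4) };
    }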

How to rotate circle in response to collision?

I am in the process of developing a very simple physics engine. The only non-static objects in it will be circles and the only collision detection I will be performing is between circles and line pieces.
For the purpose I am utilizing the principles described in Advanced Character Physics. That is, I do integration using a simple Verlet integrator. I perform collision detection and response simply by calculating the distance between the circles and the line pieces; if the distance is less than the circle's radius, I project the circle out of the line piece.
This works very well and the result is a practically perfect moving circle. The current state of the engine can be seen here: http://jsfiddle.net/8K4Wj/. This however, also shows the one major problem I am facing: The circle does not rotate at all.
As far as I can figure out, there are three different collision cases that have to be dealt with separately:
When the circle is colliding with a line vertex and is not rolling along the line.
When the circle has just hit or rolled off a line. Then the exact point of impact has to be calculated (how?) and the circle rotated according to the distance between the impact position and the projected position.
When the circle is rolling along a line. Then it is simply rotated according to the distance traveled since the last frame (see the sketch below).
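To illustrate what I mean for case 3, a minimal sketch (circle.pos / circle.prevPos are my Verlet state, circle.angle the sprite rotation):

    function rollAlongLine(circle, lineDir) { // lineDir: unit vector along the line
        var dx = circle.pos.x - circle.prevPos.x;
        var dy = circle.pos.y - circle.prevPos.y;
        var distAlongLine = dx * lineDir.x + dy * lineDir.y; // signed projection
        // Rolling without slipping: arc length = distance, so dAngle = ds / r.
        // Flip the sign if the circle sits on the other side of the line.
        circle.angle += distAlongLine / circle.radius;
    }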
Here is the closest I have got to solving the problem: http://jsfiddle.net/vYjzt/. But as the demo shows, it doesn't handle the edge cases properly.
I have searched for a solution online, but I cannot find any material that deals with this specific problem (as I said, the physics engine is relatively simple and I do not want to bother with complex physics simulation concepts).
What looks wrong in your demo is that you're not considering angular momentum and energy when determining the motion.
For the easy case, when the wheel is dropping to the floor in your demo, it stops spinning while in free fall. Angular momentum should keep it going.
A more complicated situation arises when the wheel finally lands on the floor: it moves with the same horizontal velocity it had before hitting the floor. This would be correct if it weren't rolling, but since it is rolling, some of the kinetic energy has to go into the spinning motion, and this should slow it down. As a clearer example of this, consider the opposite case, where the wheel is spinning quickly but has no linear momentum. When this wheel is set on the floor, it should take off and the spinning should slow. Also, for example, as the wheel rolls down a hill, it accelerates more slowly because the energy needs to go into both linear and circular motion.
It's not too hard to do, but to show a rolling object in a way that looks intuitively correct, I think you'll need to consider the kinetic energy and angular momentum associated with rolling. By "not too hard", I mean that all of your equations will essentially be twice as long, with one term for linear motion and another for angular. I won't recite all of the equations; it's basically just the chapter on rotational motion from any physics text.
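To make the landing case concrete, here is a back-of-the-envelope sketch (not from the original demo). Conserving angular momentum about the contact point for a uniform disc (I = m r^2 / 2) with pre-contact speed v and spin w gives the rolling speed v' = (2v + w r) / 3:

    function landOnFloor(wheel) {
        var v = wheel.vx;              // horizontal speed just before contact
        var w = wheel.angularVelocity; // spin just before contact (radians/s,
                                       // positive = spins the wheel towards +x here)
        var vRolling = (2 * v + w * wheel.radius) / 3;
        wheel.vx = vRolling;
        wheel.angularVelocity = vRolling / wheel.radius; // now rolling without slipping
    }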
(Nice demo, btw!)

Write a Circular UIGestureRecognizer

I'm looking to create a jog wheel in an iPhone / iPad application. I know that you can subclass UIGestureRecognizer to write your own recognizers. Does anyone know how (mainly the maths behind it) to create one that would detect a circular movement, perhaps in combination with a pan gesture?
Thanks
This question isn't easy. I spent some time thinking about a possible solution:
I think what you need are some key properties you have to set:
The center of the circular movement (in this case no problem, because you know the center of the jog wheel)
A corridor in which the movement should happen, so you need the inner radius and the outer radius.
Now you have something like this (unfortunately I haven't got enough reputation, so only the link): http://img17.imageshack.us/img17/4416/bildschirmfoto20100721u.png
Now the maths behind this starts:
First of all, you divide the corridor into four quarters:
From 0° to 90°
From 90° to 180°
From 180° to 270°
From 270° to 360°
For each quarter you have to figure out which way the finger is moving (let's say the 0° line runs from the center point straight to the top):
If the finger is in the first quarter, you know that if x changes to the left the rotation must be anti-clockwise, and if x changes to the right the rotation must be clockwise.
Apply this logic to all quarters. Now you know whether the jog wheel is moved clockwise or anti-clockwise. You also have to make sure the finger never leaves the corridor (if you test this logic and the movement stops because the finger leaves the corridor, make the corridor bigger). Thanks to CrystalSkull for his comment: use 44 px as the minimum width for the corridor to comply with the Human Interface Guidelines.
Summary
So now you can conclude that you need a center point and a corridor the finger can move in.
You have to figure out which quarter the finger is in and determine (using the x-value) whether the rotation is clockwise or anti-clockwise.
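If you'd rather skip the per-quarter case analysis, the same result drops out of atan2: track the touch angle around the known center and use the sign of the wrapped angle delta. A platform-neutral sketch in JavaScript (the math transplants directly into a UIGestureRecognizer subclass):

    function makeJogTracker(center, innerRadius, outerRadius) {
        var lastAngle = null;
        return function onTouch(x, y) {
            var dx = x - center.x, dy = y - center.y;
            var r = Math.sqrt(dx * dx + dy * dy);
            if (r < innerRadius || r > outerRadius) { // finger left the corridor
                lastAngle = null;
                return 0;
            }
            var angle = Math.atan2(dy, dx);
            var delta = lastAngle === null ? 0 : angle - lastAngle;
            // wrap into (-PI, PI] so crossing the +/-180 degree line doesn't jump
            if (delta > Math.PI) { delta -= 2 * Math.PI; }
            if (delta <= -Math.PI) { delta += 2 * Math.PI; }
            lastAngle = angle;
            return delta; // > 0: one direction, < 0: the other
        };
    }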
I hope this helps you a little bit.

Drag+Drop with physical behaviour

I'd like to implement a dragging feature where users can drag objects around the workspace. That of course is the easy bit. The hard bit is to try and make it a physically correct drag which incorporates rotation due to torque moments (imagine dragging a book around on a table using only one finger, how does it rotate as you drag?).
Does anyone know where I can find explanations on how to code this (2D only, rectangles only, no friction required)?
Much obliged,
David
EDIT:
I wrote a small app (with clearly erroneous behaviour) that I hope will convey what I'm looking for much better than words could. C# (VS 2008) source and compiled exe here
EDIT 2:
Adjusted the example project to give acceptable behaviour. New source (and compiled exe) is available here. Written in C# 2008. I provide this code free of any copyright, feel free to use/modify/whatever. No need to inform me or mention me.
Torque is the component of the applied force perpendicular to the vector between the point where the force is applied and the centroid of the object, times the length of that vector (the lever arm). So if you pull perpendicular to the lever arm, the whole applied force contributes to the torque. If you pull directly away from the centroid, the torque is zero.
You'd typically want to do this by modeling a spring connecting the original mouse-down point to the current position of the mouse (in object-local coordinates). Using a spring and some friction smooths out the motions of the mouse a bit.
I've heard good things about Chipmunk as a 2D physics package:
http://code.google.com/p/chipmunk-physics/
Okay, it's getting late and I need to sleep, but here are some starting points. You can either do all the calculations in one coordinate space, or you can define a coordinate space per object. In most animation systems, people use coordinate spaces per object and use transformation matrices to convert between them, because it makes the math easier.
The basic sequence of calculations is:
On mouse-down, do your hit-test and store the coordinates of the event (in object coordinate space).
When the mouse moves, create a vector representing the distance moved.
The force exerted by the spring is k * M, where M is the distance between the initial mouse-down point from step 1 and the current mouse position, and k is the spring constant of the spring.
Project that vector onto two direction vectors, starting from the initial mouse-down point. One direction is towards the center of the object, the other is 90 degrees from that.
The force projected towards the center of the object will move it towards the mouse cursor, and the other component provides the torque around the axis. How much the object accelerates depends on its mass, and the angular acceleration depends on its moment of inertia.
The friction and viscosity of the medium the object is moving in causes drag, which simply reduces the motion of the object over time.
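Pulling those steps together, a condensed sketch (made-up constants; the torque uses the 2D cross product r x F, which is equivalent to the perpendicular projection described above):

    function dragStep(body, grabLocal, mouse, dt) {
        var k = 20, drag = 4, angularDrag = 4; // hypothetical constants, tune to taste
        // World position of the grabbed point (grabLocal was stored in step 1)
        var cos = Math.cos(body.angle), sin = Math.sin(body.angle);
        var gx = body.x + grabLocal.x * cos - grabLocal.y * sin;
        var gy = body.y + grabLocal.x * sin + grabLocal.y * cos;
        // Spring force towards the mouse: F = k * (mouse - grab)
        var fx = k * (mouse.x - gx);
        var fy = k * (mouse.y - gy);
        // Torque about the centroid: 2D cross product of lever arm and force
        var torque = (gx - body.x) * fy - (gy - body.y) * fx;
        // Integrate linear and angular motion (for a solid rectangle,
        // body.inertia = body.mass * (w * w + h * h) / 12), then apply viscous drag
        body.vx += (fx / body.mass) * dt;        body.vx *= Math.exp(-drag * dt);
        body.vy += (fy / body.mass) * dt;        body.vy *= Math.exp(-drag * dt);
        body.w  += (torque / body.inertia) * dt; body.w  *= Math.exp(-angularDrag * dt);
        body.x += body.vx * dt;  body.y += body.vy * dt;  body.angle += body.w * dt;
    }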
Or, maybe you just want to fake it. In that case, just store the (x,y) location of the rectangle, and its current rotation, phi. Then, do this:
Capture the mouse-down location in world coordinates
When the mouse moves, move the box according to the change in mouse position
Calculate the angle between the mouse and the center of the object (atan2 is useful here), and the angle between the center of the object and the initial mouse-down point. Add the difference between the two angles to the rotation of the rectangle (sketched below).
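A sketch of that fake-it version, applied per mouse-move event rather than against the original mouse-down point (prevMouse is the pointer position from the previous event):

    function fakeDragStep(rect, prevMouse, mouse) {
        // Rotate by how much the pointer swings around the current centre
        var before = Math.atan2(prevMouse.y - rect.y, prevMouse.x - rect.x);
        var after  = Math.atan2(mouse.y - rect.y, mouse.x - rect.x);
        rect.phi += after - before; // phi is the rectangle's rotation
        // Then follow the mouse
        rect.x += mouse.x - prevMouse.x;
        rect.y += mouse.y - prevMouse.y;
    }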
This would seem to be a basic physics problem.
You would need to know where the click happened, as that will tell you whether they are pushing or pulling; so, though you are doing this in 2D, your calculations will need to be in 3D, and your awareness of where they clicked will be in 3D.
Each item will have properties, such as mass, and perhaps information for air resistance, since the air will also affect the motion.
You will also need to react differently based on how fast the user is moving the mouse.
So, they may be able to move the 2-ton weight faster than is physically possible, and you will just need to adapt to that, as the user will not be happy if the object being dragged lags behind the mouse pointer.
Which language?
Here's a bunch of 2D transforms in C.