I am using the CORDIC ATAN block in Simulink to calculate the phase difference.
Here is the part of the model that I am using:
I am giving the inputs a and b as 0, and I was expecting the value of Phase_Signal to be zero as well.
But apparently it's not: I am getting Phase_Signal as 1.7277.
Please let me know if I have configured the CORDIC block improperly.
ATAN Block Parameters:
Thanks
Kiran
Your expectation is simply wrong: the point (0,0) has no unique phase, so every value is correct.
To help your understanding, visualize a point whose coordinates you transform from the Euclidean representation (a,b) into polar coordinates (r, phi). For every point EXCEPT (0,0) you get a unique r and phi for your a and b. But for (0,0) only r is uniquely determined, with r = 0; the angle could be any possible value.
So for the input (0,0) you could get any phase - not even always the same one, but once 0, once 1.7 and once 0.5, or whatever (but AFAIR the Xilinx coregen CORDIC cores are deterministic and stateless, so the result should always be the same when using them).
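You can see the same ambiguity with plain atan2: the phase you get depends on the direction from which you approach the origin. A quick Python illustration of the underlying math (not the CORDIC block itself):

import math

# Approaching (0, 0) from different directions gives different phases,
# so there is no single "correct" value at the origin itself.
print(math.atan2(0.0, 1e-9))     # ~0       (approaching along +x)
print(math.atan2(1e-9, 0.0))     # ~pi/2    (approaching along +y)
print(math.atan2(-1e-9, -1e-9))  # ~-3*pi/4 (approaching along the third-quadrant diagonal)
print(math.atan2(0.0, 0.0))      # 0.0 by Python's convention, not a unique answer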
My question is as follows.
A single camera (constant in both position and orientation) is used to monitor the motion of a far-away planar target.
The intrinsic parameters, including focal length, sensor resolution, and principal point, are known. Lens distortion is not considered.
In the world system, three dimensions of the target (in a plane) are known.
I wonder whether this is enough to determine the camera's extrinsic parameters. If it is, how can I calculate them?
Many thanks.
I think you can first use corresponding points x (image points) and X (object points) to determine the projection matrix P:
x = PX
then decompose P to derive the interior and exterior matrices:
P = H1 [I|0] H2
where H1 is the 3x3 interior (intrinsic) matrix and H2 is the 4x4 exterior (extrinsic) matrix.
Since you already know the interior parameters, I think you can use them to adjust H2 to get a proper result.
Anyway, this method still lacks high accuracy and needs further refinement.
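For the first step, here is a minimal numpy sketch of the direct linear transform (DLT) for estimating P from point correspondences; the function name and array shapes are illustrative, not from the question:

import numpy as np

def estimate_projection_matrix(X_world, x_image):
    # X_world: (N, 3) object points, x_image: (N, 2) image points, N >= 6.
    # Each correspondence contributes two rows to the DLT system A p = 0.
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X_world, x_image):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # P, defined up to scale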
I am writing 3D app for OpenGL ES 2.0 where the user sets a path and flies over some terrain. It's basically a flight simulator on rails.
The path is defined by a series of points created from a spline. Every timeslice I advance the current position using interpolation, i.e. I interpolate from p0 to p1, then when I reach p1 I interpolate from p1 to p2, and finally back from pN to p0.
I create a view matrix with something analogous to gluLookAt. The eye coordinate is the current position, the look-at is the next position along the path, and the up vector is (0, 0, 1). So the camera looks towards where it is flying next, and Z points towards the sky.
But now I want to "bank" as I turn, i.e. the up vector is not necessarily straight up but changes based on the rate of turn. I know my current direction and my last direction, so I could increment or decrement the bank by some amount. The dot product would tell me the angle of turn, and a cross product would tell me whether it's to the left or right. I could maintain a bank angle kept within the range ±70 degrees, incrementing or decrementing it appropriately.
I assume this is the correct approach, but I could spend a long time implementing it only to find out it isn't.
Am I on the right track and are there samples which demonstrate what I'm attempting to do?
Since you seem to have a nice smooth plane flying in normal conditions, you don't need much... You are almost right in your approach, and it will look totally natural. All you need is a cross product between 3 sequential points A, B, C: cross = cross(A-B, C-B). Now cross is the vector around which you need to bank the plane relative to the "forward" vector.

Naturally the plane's up vector is (-gravitation), usually (0,0,1), and the forward vector at point B is C-B (if no interpolation is needed). The "side" vector is then side = normalized(cross(forward, up)), and here is where you apply the banking: side = side + cross*planeCorrectionParameter, and then up = cross(normalized(side), normalized(forward)). "planeCorrectionParameter" is a parameter you should play with; in reality it would represent some combination of parameters such as the dimensions of the wings and hull, air density, gravity, speed, mass...

Note that some of the cross operations above might need their parameter order swapped (cross(a,b) instead of cross(b,a)), so play around with that a bit.
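Here is a minimal numpy sketch of that recipe; banked_frame and correction (standing in for "planeCorrectionParameter") are illustrative names:

import numpy as np

def banked_frame(A, B, C, correction=0.2):
    # A, B, C: three consecutive path points as numpy arrays.
    turn = np.cross(A - B, C - B)         # turn axis; its magnitude grows with the turn
    forward = C - B
    forward = forward / np.linalg.norm(forward)
    world_up = np.array([0.0, 0.0, 1.0])  # -gravitation
    side = np.cross(forward, world_up)
    side = side / np.linalg.norm(side)
    side = side + turn * correction       # banking: tilt the side vector with the turn
    side = side / np.linalg.norm(side)
    up = np.cross(side, forward)          # may need cross(forward, side); see the note above
    return forward, up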
Your approach sounds correct, but it will look unnatural. Take for example a path that looks like a sine function: the plane might be "going" to the left when it's actually going to the right.
I can mention two solutions to your problem. First, you can take the second derivative of the spline. I'm assuming your spline is a function f(t) that returns a point (x, y, z). The second derivative of a parametric curve is a vector that points toward the center of rotation: for a circular path it points at the center of the circle.
A couple of things to note with the above method: the second derivative of a straight line is 0, so the vector will also be 0, and you have to fix the up vector manually. Also, you might want to fix this vector so the plane won't turn upside down.
That works and will look better than your method, but it will still look unnatural for some curves. The best method I can mention is quaternion interpolation, such as slerp.
At each point of the curve, you also have a "right" vector: the vector that points to the right of the plane. From the curve and this vector, you can calculate the up vector at that point. Then you use quaternion interpolation to interpolate the up vectors along the curve.
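For reference, a minimal slerp between two unit quaternions (stored as length-4 numpy arrays); this is the standard formula, not code from the question:

import numpy as np

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions q0 and q1, t in [0, 1].
    dot = np.dot(q0, q1)
    if dot < 0.0:               # take the shorter path on the 4D sphere
        q1, dot = -q1, -dot
    theta = np.arccos(min(dot, 1.0))
    if theta < 1e-6:            # nearly identical quaternions: avoid dividing by ~0
        return q0
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)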
If position and rotation depend only on the spline's curvature, the easiest way is numerical differentiation of the 3D spline (you will have two derivatives, one for the vertical and one for the horizontal component). Your up and side vectors will be normals to the tangent.
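A small finite-difference sketch of that idea, assuming the spline is available as a function f(t) returning a 3D numpy point:

import numpy as np

def frame_at(f, t, h=1e-3):
    # Central differences for the first and second derivatives of the spline.
    forward = (f(t + h) - f(t - h)) / (2.0 * h)           # tangent
    accel = (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)  # second derivative
    forward = forward / np.linalg.norm(forward)
    # The part of the second derivative normal to the tangent points toward the
    # center of curvature; it is near zero on straight segments (handle that case separately).
    normal = accel - np.dot(accel, forward) * forward
    side = np.cross(forward, normal)
    return forward, normal, side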
float pts[N][4]={{x1,y1,z1,v1},{x2,y2,z2,v2},...,{xN,yN,zN,vN}};
// in a view spanning (0,0)-(w,h)
// N == w*h
// if pts[n][3] == 0 then pts[n] is invalid
How can I evaluate the normal vector for each valid point in pts?
pts holds the points of point-cloud data, as seen in a view of size (w, h);
something like this:
p11,p12,p13...p1w,
p21,p22,p23...p2w,
...
...
ph1,ph2,ph3...phw,
Each point is associated with its neighbors, and together they form a surface.
The points are arranged tightly one by one, in rows and columns. Our task is to find a way to evaluate the normal vector of each point, oriented toward our viewpoint, as accurately as possible.
I'm trying to do this in real time, since the points are generated in real time - for example, 1024x1024 points at a time. Has anyone published a solution to this before?
Typically, the normal for a vertex on a surface is calculated as the mean of the normal vectors of the adjacent polygons. See: http://www.opengl-redbook.com/appendices/AppH.pdf
In this case, for vertex p55 with the following neighbors:
p44 p45 p46
p54 p55 p56
p64 p65 p66
you could find the normals for each of the triangles,
n1 = (p55 - p44) x (p55 - p45)
n2 = (p55 - p45) x (p55 - p46)
...
making sure to preserve the orientation of the vectors so that all of the normals point in the same direction (toward the viewer). From there you just have to normalize all of the vectors and then take their average.
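A minimal numpy sketch of that averaging over the grid (array shapes and names are illustrative; the neighbor order fixes the orientation, so flip it if the normals point away from the viewer):

import numpy as np

def vertex_normal(pts, valid, r, c):
    # pts: (h, w, 3) grid of points; valid: (h, w) boolean mask.
    # The 8 neighbors in counter-clockwise order around (r, c).
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = valid.shape
    p = pts[r, c]
    n = np.zeros(3)
    for (dr1, dc1), (dr2, dc2) in zip(ring, ring[1:] + ring[:1]):
        r1, c1, r2, c2 = r + dr1, c + dc1, r + dr2, c + dc2
        if 0 <= r1 < h and 0 <= c1 < w and 0 <= r2 < h and 0 <= c2 < w \
                and valid[r1, c1] and valid[r2, c2]:
            n += np.cross(pts[r1, c1] - p, pts[r2, c2] - p)  # one triangle's normal
    length = np.linalg.norm(n)
    return n / length if length > 0 else n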
It is impossible to evaluate the normal vector of a point by itself. A point cannot have a normal vector; a plane can.
Why hello thar. I have a 3D model whose starting angles along its 3 axes are known. Those angles are transformed into a Direction Cosine Matrix (DCM) using this pattern (information found here):
New angle values are obtained as time goes by, corresponding to updates of the model's orientation. To take those new values into account, I update the DCM with them in this way:
newDCM = oldDCM * newanglesDCM
What I want and how I do it: Now the tricky part. I actually only want the Y component of the rotation matrix. In order to apply a motion to my model, I need the matrix to be flattened so the motion vector goes neither up into the air nor down into the ground. To do so, I simply pull the 3 angles back out of the rotation matrix and create a new one from the angles [0 Y 0].
The problem I have: When rotations are applied to the model, the DCM is updated. When a motion is detected, I multiply the motion vector [0 Yvalue 0] by the DCM that has been flattened (as explained above). The result is great when the instantaneous rotations have null or near-null X and Z components. But as soon as the model is in a situation where X and Z have significant values, the "orientation" of the model's motion is wrong. If I apply rotations that return it to an "only Y" situation, it starts being good again.
What could go wrong: Either my Direction Cosine Matrix is wrong, or the technique I use to flatten the matrix is just completely stupid.
Thanks for your help, I'd really appreciate it if you could give me a hand with this!
Greg.
EDIT: Example as requested
My model has 3 axes X, Y and Z. This defines the XYZ convention when the model is at rest. At starting point t0, I know the angles dAx, dAy and dAz that allow me to rotate the model from its original configuration to the one it is in at t0. If that bugs you, let's say the model is at rest at t0; it doesn't matter.
I create the DCM just like explained in the image (let it be an identity matrix if it started at rest).
Every now and then, rotations are applied to the model. Those rotations also consist of a dAx, dAy and dAz. Thus I update the rotation matrix (DCM) by multiplying the old one by the newly generated one: newDCM = oldDCM * newanglesDCM.
Now let's say I want the model to move, on the grid, from one point to another. Imagine the grid to be a street, for example. Whether the model is oriented towards the sky, to one side, or straight ahead, I want the motion to be the same: along the road, not rising into the air or diving into the ground.
If I kept the rotation matrix as it is, applying a [0 Y 0] rotation WOULD make the model go somewhere I don't want it to. Thus I try to recover my original XZ frame by flattening the DCM. Then I still have the Y component, so I know where in the street the model is moving.
Imagine a character, whose head is the model and who is walking outside. If he looks to a building's window and walks, he won't walk in the air right up to the window - he'll walk to the feet of the building. That is exactly what I want to do :D
What you have to do is equate the elements of the two rotation matrices
E_1 = Rx(dθx)Ry(dθy)Rz(dθz)
and
E_2 = Rx(dφx)Rz(dφz)Ry(dφy)
on an element-by-element basis. Find two elements that contain only sin(dφy) and cos(dφy) and divide them to get tan(dφy) = ... in terms of dθx, dθy and dθz.
I tried to do this with the DCM given, but I cannot replicate the sequence of rotations to get what you have. My E_1 above is similar, but some signs are different. In the example I worked out, I got the following expressions:
dφy=atan( tan(dθy)/cos(dθz) )
dφz=atan( (cos(dθy)*sin(dθz))/(cos(dθy)*cos(dθz)*cos(dφy)+sin(dθy)*sin(dφy)) )
dφx=atan( (cos(dθx)*sin(dθy)*sin(dθz)+sin(dθx)*cos(dθz))/(cos(dθx)*cos(dθz)- ...
You have to work out your own relationships based on the sequences you use.
Note that once dφy is known, the above E_1 = E_2 equality becomes
Rx(dφx)Rz(dφz) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)
which is solved for dφz and then you have
Rx(dφx) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)Rz(-dφz)
for dφx.
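If you would rather avoid the symbolic manipulation, the same re-decomposition can be done numerically, for example with scipy; this sketch assumes intrinsic rotation sequences, so double-check it against your own DCM convention:

import numpy as np
from scipy.spatial.transform import Rotation as R

theta = [0.3, 0.7, -0.2]                   # example dθx, dθy, dθz in radians
E1 = R.from_euler('XYZ', theta)            # Rx(dθx) Ry(dθy) Rz(dθz), intrinsic
phi_x, phi_z, phi_y = E1.as_euler('XZY')   # angles of Rx(dφx) Rz(dφz) Ry(dφy)
# Sanity check: both sequences represent the same rotation.
assert np.allclose(E1.as_matrix(), R.from_euler('XZY', [phi_x, phi_z, phi_y]).as_matrix())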
I am trying to implement inverse kinematics for a 2D arm (made up of three sticks with joints). I am able to rotate the lowest arm to the desired position. Now I have some questions:
How can I make the upper arm move along with the third so that the end point of the arm reaches the desired point? Do I need to use rotation matrices for both, and if so, can someone give me an example or some help? Is there any other possible way to do this without rotation matrices?
The lowest arm only moves in one direction. I tried googling it; people say that the cross product of two vectors gives the direction for the arm, but that is for 3D. I am using 2D, and the cross product of two 2D vectors gives a scalar. So how can I determine its direction?
Please, any help would be appreciated.
Thanks in advance
Vikram
I'll give it a shot, but since my robotics is two decades in the past, take it with a grain of salt.
The way I learned it, every joint was described by its own rotation matrix, defined relative to its current position and orientation. The coordinate of the whole arm's endpoint was then calculated by combining the rotation matrices together.
This achieved exactly the effect you are looking for: you could move only one joint (change its orientation), and all the other joints followed automatically.
You won't have much chance of getting around matrices here - in fact, if you use homogeneous coordinates, all joint calculations (rotations as well as translations) can be modeled with matrix multiplications. The advantage is that the full arm position can then be described with a single matrix (plus the arm's origin).
With this transformation matrix, you can tackle the inverse kinematics problem: since the transformation matrix's elements depend on the angles of the joints, you can treat the whole calculation 'endpoint = startpoint x transformation' as a system of equations, and with startpoint and endpoint known, you can solve this system to determine the unknown angles. The difficulty here is that the system may not be solvable, or that there may be multiple solutions.
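As a small illustration of that idea in 2D (homogeneous 3x3 matrices; the function and values are just a sketch, not the OP's code):

import numpy as np

def joint_transform(angle, length):
    # Rotation by the joint angle followed by a translation of the link
    # length along the local x axis, as one homogeneous 3x3 matrix.
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0.0, 0.0, 1.0]])

# Endpoint of a 3-link arm: compose the per-joint transforms in order.
q = [0.3, -0.5, 0.2]      # example joint angles
lengths = [1.0, 1.0, 1.0]
T = np.eye(3)
for angle, length in zip(q, lengths):
    T = T @ joint_transform(angle, length)
endpoint = T[:2, 2]       # the translation part is the arm's endpoint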
I don't quite understand your second question, though - what are you looking for?
Instead of a rotation matrix, the rotation can be represented by its angle or by a complex number on the unit circle, but it's the same thing really. More importantly, you need a representation T of rigid body transformations, so that you can write things like t1 * t2 * t3 to compute the position and orientation of the third link.
Use atan2 to compute the angle between the vectors.
As the following Python example shows, those two things are enough to build a small IK solver.
from gameobjects.vector2 import Vector2 as V
from matrix33 import Matrix33 as T
from math import sin, cos, atan2, pi
import random
The gameobjects library does not have 2D transformations, so you have to write matrix33 yourself. Its interface is just like gameobjects.matrix44.
Define the forward kinematics function for the transformation from one joint to the next. We assume the joint rotates by angle and is followed by a fixed transformation joint:
def fk_joint(joint, angle): return T.rotation(angle) * joint
The transformation of the tool is tool == fk(joints, q) where joints are the fixed transformations and q are the joint angles:
def fk(joints, q):
    prev = T.identity()
    for i, joint in enumerate(joints):
        prev = prev * fk_joint(joint, q[i])
    return prev
If the base of the arm has an offset, replace the T.identity() transformation.
The OP is solving the IK problem for position by cyclic coordinate descent. The idea is to move the tool closer to the goal position by adjusting one joint variable at a time. Let q be the angle of a joint and prev be the transformation of the base of the joint. The joint should be rotated by the angle between the vectors to the tool and goal positions:
def ccd_step(q, prev, tool, goal):
    a = tool.get_position() - prev.get_position()
    b = goal - prev.get_position()
    return q + atan2(b.get_y(), b.get_x()) - atan2(a.get_y(), a.get_x())
Traverse the joints and update the tool configuration for every change of a joint value:
def ccd_sweep(joints, tool, q, goal):
    prev = T.identity()
    for i, joint in enumerate(joints):
        next = prev * fk_joint(joint, q[i])
        q[i] = ccd_step(q[i], prev, tool, goal)
        prev = prev * fk_joint(joint, q[i])
        tool = prev * next.get_inverse() * tool
    return prev
Note that fk() and ccd_sweep() are the same for 3D; you just have to rewrite fk_joint() and ccd_step().
Construct an arm with n identical links and run cnt iterations of the CCD sweep, starting from a random arm configuration q:
def ccd_demo(n, cnt):
    q = [random.uniform(-pi, pi) for i in range(n)]
    joints = [T.translation(0, 1)] * n
    tool = fk(joints, q)
    goal = V(0.9, 0.75)  # Some arbitrary goal.
    print "i Error"
    for i in range(cnt):
        tool = ccd_sweep(joints, tool, q, goal)
        error = (tool.get_position() - goal).get_length()
        print "%d %e" % (i, error)
We can try out the solver and compare the rate of convergence for different numbers of links:
>>> ccd_demo(3, 7)
i Error
0 1.671521e-03
1 8.849190e-05
2 4.704854e-06
3 2.500868e-07
4 1.329354e-08
5 7.066271e-10
6 3.756145e-11
>>> ccd_demo(20, 7)
i Error
0 1.504538e-01
1 1.189107e-04
2 8.508951e-08
3 6.089372e-11
4 4.485040e-14
5 2.601336e-15
6 2.504777e-15
In robotics we most often use DH parameters for the forward and reverse kinematics. Wikipedia has a nice introduction.
The DH (Denavit-Hartenberg) notation is part of the solution. It helps you collect a succinct set of values that describe the mechanics of your robot such as link length and joint type.
From there it becomes easier to calculate forward kinematics. The first thing you have to understand is how to translate a coordinate frame from one place to another. For example, given your robot (or its DH table), what set of rotations and translations do you have to apply to one coordinate frame (the world, for example) to find the location of a point (or vector) in the robot's wrist coordinate frame?
As you may already know, homogeneous transformation matrices are very useful for such transformations. They are 4x4 matrices that encapsulate rotation and translation. Another very useful property of these matrices is that if you have two coordinate frames linked and defined by some rotation and translation, multiplying their two matrices together gives a single matrix through which you can transform points across both frames at once.
So the DH table will help you build that matrix.
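For concreteness, here is the standard (distal) DH link transform as a small numpy sketch; the parameter names follow the usual convention (theta, d, a, alpha):

import numpy as np

def dh_transform(theta, d, a, alpha):
    # Homogeneous 4x4 transform from frame i-1 to frame i, standard DH convention:
    # Rz(theta) * Tz(d) * Tx(a) * Rx(alpha).
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Multiplying the per-link transforms in order gives the wrist pose in the base frame:
# T_0n = dh_transform(...) @ dh_transform(...) @ ...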
Inverse kinematics is a bit more complicated, though, and depends on your application. The complication arises from having multiple solutions to the same problem: the greater the number of DOF, the greater the number of solutions.
Think about your arm: pinch something solid around you. You can move your arm to several positions in space while keeping your pinching vector unchanged. Solving the inverse kinematics problem thus also involves deciding which solution to choose.