I'm attempting to rotate a 3D non-unit vector (Vector_3) so that it is coincident with another 3D non-unit vector, using the Exact_predicates_exact_constructions_kernel.
I'm creating the rotation matrix mostly by referring to this. However, creating unit vectors is non-trivial here. What is the most appropriate method for performing such a rotation with this kernel?
As soon as you need sqrt, you will only be able to get an approximation. I suggest that you use CGAL::Cartesian_converter to do the operation in a kernel that supports sqrt (like CGAL::Simple_cartesian<double>, if no exact predicate is needed) and then convert the result back to the EPEC kernel.
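A rough sketch of that round trip, assuming the goal is to rotate v onto the direction of target with Rodrigues' formula (the rotate_onto helper is my own illustration, not part of CGAL; only the kernels and Cartesian_converter are):

#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Simple_cartesian.h>
#include <CGAL/Cartesian_converter.h>
#include <cmath>

typedef CGAL::Exact_predicates_exact_constructions_kernel EPEC;
typedef CGAL::Simple_cartesian<double>                     SC;

// Rodrigues-style rotation of v onto the direction of target, done in the
// inexact kernel where sqrt is available.
SC::Vector_3 rotate_onto(const SC::Vector_3& v, const SC::Vector_3& target)
{
    SC::Vector_3 a = v      / std::sqrt(v.squared_length());      // unit source
    SC::Vector_3 b = target / std::sqrt(target.squared_length()); // unit target
    SC::Vector_3 axis = CGAL::cross_product(a, b);
    double s = std::sqrt(axis.squared_length()); // sin(theta)
    double c = a * b;                            // cos(theta), scalar product
    if (s < 1e-12) return (c > 0 ? v : -v);      // degenerate: (anti)parallel
    axis = axis / s;                             // unit rotation axis
    // Rodrigues: v*cos + (axis x v)*sin + axis*(axis . v)*(1 - cos)
    return v * c + CGAL::cross_product(axis, v) * s + axis * (axis * v) * (1.0 - c);
}

int main()
{
    EPEC::Vector_3 v(1, 2, 3), target(0, 0, 5);

    CGAL::Cartesian_converter<EPEC, SC> to_inexact;
    CGAL::Cartesian_converter<SC, EPEC> to_exact;

    // Do the sqrt-dependent work in the double kernel, then convert back.
    EPEC::Vector_3 rotated = to_exact(rotate_onto(to_inexact(v), to_inexact(target)));
    return 0;
}

Note that the rotated vector is only as accurate as the double computation; the conversion back to EPEC just makes it usable with the rest of your exact data.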
I'm learning about inverse kinematics, and am trying to write a human skeleton simulation. I am having trouble deciding how to parameterize the rotation of a ball-and-socket joint.
Two methods that I can think of:
The familiar axis-angle (or Euler angle) way. Can change the characteristics of the joint by changing the order of rotation. Can also just use rotation matrices.
Using two quaternion rotations, one along the axis of the bone, and one to determine the orientation. I think this is more intuitive in terms of simulating the joint.
So which one should I use? As far as I can make out:
The axis-angle method is prone to gimbal-lock, which I can visualize
For the other method it is ambiguous which axes should be used when calculating the Jacobian entries, i.e. the v_j vector in the equation ∂s/∂θ_j = v_j × (s − p_j)
(source: https://www.math.ucsd.edu/~sbuss/ResearchWeb/ikmethods/iksurvey.pdf, page 5)
I'm inclined to use the second method, as I can get around the problem by using CCD instead of Jacobian pseudo-inverses. But I would just like to know which of these methods is used as standard (axis-angle or quaternions), and if one is, what particular details I need to take into account if I were to adopt it.
Any advice would be helpful, but preferably professional, and in a non-esoteric language should you be kind enough to spare some code :-]
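In case it helps, here is a small sketch (C++) of how one Jacobian column is usually formed for a single rotational joint from that equation; the Vec3 type and the function names are made up, only the cross-product formula comes from the survey:

// Hypothetical minimal 3D vector type.
struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

// One Jacobian column for rotational joint j, following Buss' survey:
// ds/dtheta_j = v_j x (s - p_j), where v_j is the joint's unit rotation axis,
// p_j its position, and s the end-effector position (all in world space).
Vec3 jacobian_column(Vec3 axis_vj, Vec3 joint_pos_pj, Vec3 effector_pos_s) {
    return cross(axis_vj, effector_pos_s - joint_pos_pj);
}

For a ball-and-socket joint you would emit one such column per degree of freedom, each with the current world-space axis of that degree of freedom as v_j, regardless of whether you parameterise it with three angles or with quaternions.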
I am trying to make a point-cloud mapping of a user with multiple Kinects in Processing. I get the user's front and back with 2 Kinects on opposite sides and generate both point clouds.
The trouble is that the point clouds' X/Y/Z are not synchronized; it just puts the two of them on screen, and it surely looks messy. Is there a way to calculate or make a comparison between them, to translate the second point cloud to "join" the first? I could translate the position manually, but if I move the sensors it will go out of alignment again.
Supposing all the Kinects are stationary, I guess you would have to go in this order:
decide on which Kinect to use as a global reference,
get parameters for a 3D transformation for each of the other Kinects - I'd try to use PMatrix3D and applyMatrix(), although it may be slow,
apply the transformation to each of the other Kinects' point clouds and draw the clouds
I don't (yet) know how to get the transformation parameters for a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, maybe by displaying the point clouds from each pair of Kinects and registering the points you know are the same in both clouds. After getting enough of them, construct a PMatrix3D and apply it inside pushMatrix()/popMatrix().
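As a rough, untested illustration (plain C++ rather than Processing) of one very simple, non-least-squares way to use such reference points: take three matching, non-collinear points seen by both Kinects, build an orthonormal frame from them in each cloud, and use the pair of frames as the rigid transform. The V3 type and all names are made up; in Processing the resulting rotation and translation would go into a PMatrix3D.

#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3 cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 norm(V3 a) { double l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

// Build an orthonormal frame (3 axes) from three non-collinear points.
static void frame(V3 p0, V3 p1, V3 p2, V3 f[3]) {
    f[0] = norm(sub(p1, p0));
    V3 u = sub(p2, p0);
    double d = dot(u, f[0]);
    f[1] = norm({u.x - d*f[0].x, u.y - d*f[0].y, u.z - d*f[0].z});
    f[2] = cross(f[0], f[1]);
}

// Express a point p from the second Kinect's cloud in the first Kinect's
// coordinates, given three matching reference points (a* in cloud A, b* in B):
// local coords c = B^T (p - b0), then p' = a0 + A * c.
V3 to_reference(V3 p, V3 a0, V3 a1, V3 a2, V3 b0, V3 b1, V3 b2) {
    V3 A[3], B[3];
    frame(a0, a1, a2, A);
    frame(b0, b1, b2, B);
    V3 q = sub(p, b0);
    double c0 = dot(q, B[0]), c1 = dot(q, B[1]), c2 = dot(q, B[2]);
    return {a0.x + c0*A[0].x + c1*A[1].x + c2*A[2].x,
            a0.y + c0*A[0].y + c1*A[1].y + c2*A[2].y,
            a0.z + c0*A[0].z + c1*A[1].z + c2*A[2].z};
}

With more than three reference points you would want a proper least-squares fit instead, which is what the Procrustes/ICP approaches below do.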
This is the approach used by this guy: http://www.youtube.com/watch?v=ujUNj1RDL4I
An alternative approach would be to use an Iterative Closest Point (ICP) algorithm and construct the 3D transform from its output. I'd really like an ICP or PCL library for Processing, if anyone knows a good one.
I am a bit confused about how I need to move my basic square. Should I use my translation matrix or just change the object's vertices? Which one is accurate?
I use a vertex shader:
gl_Position = myPMVMatrix * a_vertex;
and I also use a VBO.
From an accuracy point of view both methods are about equally good.
From a performance point of view, it's about minimizing bottlenecks:
For a single square you are probably not able to measure any differences, but when you think about 1 million squares (or triangles), things get a little more complicated:
If all of your triangles change position relative to each other, you are probably better off changing the VBO, because you can push the data directly to the graphics card's memory instead of making a million OpenGL calls (which are very slow).
If all your triangles stay at the same position relative to each other (as is the case in a normal 3D model), you should just change the transformation matrix. In this case you don't have to push the data onto the graphics memory again; you have only one function call, and you are transferring only a few bytes of data.
Depending on your application, it may be a good choice to divide your triangles into different categories and update them appropriately.
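To make the two cases concrete, here is a minimal host-side sketch (OpenGL ES 2.0 from C/C++); the handles, sizes and function names are placeholders, but glBufferSubData and glUniformMatrix4fv are the actual calls involved:

#include <GLES2/gl2.h>

// Case 1: vertices move independently of each other -> rewrite the VBO data.
// 'vertices' holds the already-updated positions computed on the CPU.
void update_moving_vertices(GLuint vbo, const GLfloat* vertices, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, vertices); // push new data to GPU memory
}

// Case 2: the whole object moves rigidly -> leave the VBO untouched and only
// upload a new 4x4 transformation matrix (16 floats, column-major).
void update_rigid_object(GLint matrixUniform, const GLfloat pmvMatrix[16])
{
    glUniformMatrix4fv(matrixUniform, 1, GL_FALSE, pmvMatrix);
}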
Don't move objects by changing all of the vertices! What about a complex model with thousands of vertices? Even if it's a simple square, don't get into such bad practice. That's exactly what transformation matrices are for. You are already using a transformation matrix in your shader code. From the naming I assume it's a pre-multiplied model-view-projection matrix. So it consists of the model matrix positioning the object in world space (this is usually where your translation should go), the view matrix positioning the world in eye/camera space (sometimes the model and view matrices are combined into a single modelview matrix, as in fixed-function GL), and the projection matrix doing any kind of perspective projection and/or transformation into the clipping volume, all three multiplied together as P * V * M. If there are still questions about these transformation matrices and their use, consult some literature on 3D transformations, or just your favourite OpenGL tutorial.
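For example, a hedged sketch of what "move the square with the matrix" looks like on the CPU side: rebuild a translation model matrix, pre-multiply P * V * M, and upload it into the shader's myPMVMatrix uniform (the helper functions are my own; glUniformMatrix4fv is the real GL call):

#include <GLES2/gl2.h>

// Column-major 4x4 matrices, as OpenGL expects them.
static void translation(GLfloat m[16], GLfloat tx, GLfloat ty, GLfloat tz)
{
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; // identity
    m[12] = tx; m[13] = ty; m[14] = tz;                             // last column holds the translation
}

static void multiply4x4(GLfloat out[16], const GLfloat a[16], const GLfloat b[16])
{
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            out[c*4 + r] = 0.0f;
            for (int k = 0; k < 4; ++k)
                out[c*4 + r] += a[k*4 + r] * b[c*4 + k];
        }
}

// Per frame: M places the square, then myPMVMatrix = P * V * M is uploaded once.
void upload_pmv(GLint myPMVMatrixLoc, const GLfloat proj[16], const GLfloat view[16],
                GLfloat x, GLfloat y, GLfloat z)
{
    GLfloat model[16], vm[16], pvm[16];
    translation(model, x, y, z);
    multiply4x4(vm, view, model);   // V * M
    multiply4x4(pvm, proj, vm);     // P * (V * M)
    glUniformMatrix4fv(myPMVMatrixLoc, 1, GL_FALSE, pvm);
}

The vertex data in the VBO never changes; only these 16 floats are sent per draw.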
Like many 3D graphics programs, I have a bunch of objects that each have their own model coordinates (from -1 to 1 in the x, y, and z axes). Then I have a matrix that takes them from model coordinates to world coordinates (using the location, rotation, and scale of the object being drawn). Finally, I have a second matrix to turn those world coordinates into the canonical coordinates that OpenGL ES 2.0 will use to draw to the screen.
So, because one object can contain many vertices, all of which use the same transforms into world space and canonical coordinates, it's faster to calculate the product of those two matrices once and put each vertex through the resulting matrix, rather than putting each vertex through both matrices.
But, as far as I can tell, there doesn't seem to be a way in OpenGL ES 2.0 shaders to have it calculate the matrix once and keep using it until one of the two matrices is changed with glUniformMatrix4fv() (or another function that sets a uniform). So it seems like the only way to calculate the matrix once would be to do it on the CPU and then send the result to the GPU as a uniform. Otherwise, with something like:
gl_Position = uProjection * uMV * aPosition;
the product will be calculated over and over again for every vertex, which seems like a waste of time.
So, which way is usually considered standard? Or is there a different way that I am completely missing? As far as I could tell, the shader used to implement the OpenGL ES 1.1 pipeline in the OpenGL ES 2.0 Programming Guide only used one matrix, so is that used more?
First, the correct OpenGL term for "canonical coordinates" is clip space.
Second, it should be this:
gl_Position = uProjection * (uMV * aPosition);
What you posted does a matrix/matrix multiply followed by a matrix/vector multiply. This version does 2 matrix/vector multiplies. That's a substantial difference: a 4x4 matrix/matrix product takes 64 scalar multiplications while a matrix/vector product takes 16, so the grouped version costs 32 multiplications per vertex instead of 80.
You're using shader-based hardware; how you handle matrices is up to you. There is nothing that is "considered standard"; you do whatever best fits your needs.
That being said, unless you are doing lighting in model space, you will often need some intermediary between model space and 4D homogeneous clip-space. This is the space you transform the positions and normals into in order to compute the light direction, dot(N, L), and so forth.
Personally, I wouldn't suggest world space for reasons that I explain thoroughly here. But whether it's world space, camera space, or something else, you will generally have some intermediate space that you need positions to be in. At which point, the above code becomes necessary, and thus there is no time wasted.
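To spell out the two possibilities as a sketch (uMV and uProjection are the question's uniform names; everything else here is mine): either pre-multiply everything on the CPU and upload one uniform, or keep the two uniforms separate when the shader needs the intermediate result.

#include <GLES2/gl2.h>

// Option A: no intermediate space needed in the shader -> pre-multiply once
// per object on the CPU and upload a single combined matrix.
//   shader: gl_Position = uMVP * aPosition;
void upload_single_mvp(GLint uMVPLoc, const GLfloat mvp[16])
{
    glUniformMatrix4fv(uMVPLoc, 1, GL_FALSE, mvp);
}

// Option B: lighting (or anything else) needs eye-space positions/normals ->
// keep the matrices separate and let the shader do two mat/vec multiplies:
//   shader: vec4 eyePos = uMV * aPosition;
//           gl_Position = uProjection * eyePos;
void upload_separate(GLint uMVLoc, GLint uProjLoc,
                     const GLfloat mv[16], const GLfloat proj[16])
{
    glUniformMatrix4fv(uMVLoc, 1, GL_FALSE, mv);
    glUniformMatrix4fv(uProjLoc, 1, GL_FALSE, proj);
}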
I'm trying to make a 3D object do a wobble effect, very much like a boss in StarFox 64 did when it teleported (see this video at 5:17 for reference). This seems like either a skewing effect, or perhaps a non-uniform scale whose axis rotated around and was applied without rotating the object itself.
Does anyone have any idea how this might be done, or perhaps does anyone have any links to programs where I can play with the matrices directly to see how this is done?
You can use a skew (shear) based on the roll axis in the Euler-angle coordinate system.
See Euler angles
http://en.wikipedia.org/wiki/Euler_angles
Euler-angles-to-matrix transformation (the "General rotations" part of the article):
http://en.wikipedia.org/wiki/Rotation_matrix
An Euler-angles-to-matrix conversion utility in the DirectX SDK:
http://msdn.microsoft.com/en-us/library/microsoft.windowsmobile.directx.matrix.rotationyawpitchroll%28v=VS.85%29.aspx
And threads about skew matrices
skew matrix algorithm
http://www.quantunet.com/flash8/knowledgebase/actionscript/advanced/matrix/matrix_skew.html
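If you just want something to play with, here is a small sketch (C++) of a time-varying shear (skew) matrix you could multiply into the model matrix each frame. Whether it matches the StarFox effect exactly is a guess, but it gives the "lean back and forth without rotating" look; the parameter names and defaults are my own.

#include <cmath>

// Column-major 4x4 shear matrix: x and z are displaced proportionally to y,
// with the shear amounts oscillating over time to produce a wobble.
void wobble_matrix(float m[16], float timeSeconds,
                   float amplitude = 0.3f, float speed = 4.0f)
{
    for (int i = 0; i < 16; ++i) m[i] = (i % 5 == 0) ? 1.0f : 0.0f; // identity

    // shear coefficients: how much x and z change per unit of y
    m[4] = amplitude * std::cos(speed * timeSeconds); // x += k_x * y
    m[6] = amplitude * std::sin(speed * timeSeconds); // z += k_z * y
}

Multiply this matrix into the model matrix (before the view/projection) so the object itself is not rotated, only sheared around its vertical axis.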