I have an animated object converted from Euler XYZ rotation controller to linear rotation controller.
The animation is fine and everything looks good, but there are no keys in the Curve Editor, only in the Dope Sheet; the whole rotation channel is missing.
Is this working as intended? I couldn't find an explanation in the 3ds Max documentation.
The Linear controller for rotation is different from, say, Linear Float in that it stores quaternion values, and those can't be displayed as a curve. If you want to modify the key values from within the Curve Editor, use Euler XYZ instead and assign a Linear Float controller to each of the axes. That way you still get linear interpolation between keys, and you also have individual curves to work with.
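For example, the reassignment can be scripted; here is a minimal sketch assuming 3ds Max's pymxs Python bindings (they mirror the MaxScript idiom, shown in the comments, and the exact property names should be double-checked against your Max version):

from pymxs import runtime as rt

node = rt.selection[0]                            # assumes the animated object is selected
node.rotation.controller = rt.Euler_XYZ()         # MaxScript: $.rotation.controller = Euler_XYZ()
euler = node.rotation.controller
euler.X_Rotation.controller = rt.linear_float()   # MaxScript: ...X_Rotation.controller = linear_float()
euler.Y_Rotation.controller = rt.linear_float()
euler.Z_Rotation.controller = rt.linear_float()   # each axis now has its own curve with linear keys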
I'm working on a real-time ballistics simulation of many particles under the effect of highly non-uniform wind. The wind data comes from CFD in the form of a discretized 2D vector field (an unstructured mesh; each grid point has an associated vector giving the direction and magnitude of the air velocity).
The problem is that I need to be able to extract the wind vector at any position that a particle occupies, so that aerodynamic drag can be computed and injected into ballistics physics. This is trivial if the wind data can be approximated by an analytical/numerical vector field where a vector can be computed with an algebraic expression. However, the wind data I'm working with is quite complex and there doesn't seem to be any way to approximate it.
I have two ideas:
Find a way to interpolate the vector field every time each particle's position is updated. This sounds computationally expensive, so I'm not sure it can be done in real time. Also, the mesh is unstructured, and I'm not sure whether 2D interpolation can be done on this kind of mesh.
Just pick the grid point closest to the particle's position and take the vector from there (given that the mesh is fine enough for this to accurately represent the actual vector field). This then turns into a real-time nearest-neighbor problem with rapid and numerous queries (see the sketch below).
I'm not sure whether these are the only two solutions to this problem, or whether they can be done in real time at all. How should I go about solving this?
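For concreteness, here is roughly what I'm picturing for both ideas, as a sketch assuming SciPy (the data here is made up; LinearNDInterpolator triangulates the unstructured points once and then does barycentric interpolation per query, while cKDTree answers nearest-neighbor queries):

import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(5000, 2))  # made-up unstructured grid positions
winds = rng.normal(0, 5, size=(5000, 2))      # made-up air velocity at each grid point

interp = LinearNDInterpolator(points, winds)  # idea 1: built once, interpolated per query
tree = cKDTree(points)                        # idea 2: built once, nearest neighbor per query

def wind_at(positions):
    """Wind vectors for an (M, 2) array of particle positions."""
    v = interp(positions)                 # NaN for positions outside the convex hull
    outside = np.isnan(v[:, 0])
    _, idx = tree.query(positions[outside])
    v[outside] = winds[idx]               # fall back to the nearest grid point there
    return v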
With a camera inside a cylinder I capture an image. I want to transform that image into a flat 2D plane. The image inside the cylinder has a lot of dots which form a grid.
What I tried was estimating the transformation. With blob analysis I can detect the center of each dot and obtain its coordinates in pixels. I save these in a matrix called ImCilynder. After that I create a matrix, Im2d, with the coordinates of those points in the plane.
I calculate the transformation (H) by solving the equation:
ImCilynder * H = Im2d, where H is a 9x1 matrix
H = pinv(ImCilynder) * Im2d
But when I test with the very same points, the result is completely random, so I must be doing something wrong.
Is there a better way to solve this? Can you help me?
To explain better:
I'm trying to find the transformation which transforms the image above to this image:
So, to clarify, I want to project the points I see in the first image onto a plane; basically, I want to unwrap the cylinder.
After calculating the transformation matrix, I expect to multiply the first image by the transformation matrix and obtain the points in the plane, or to multiply the coordinates of the centers of the black dots and obtain the coordinates of those dots in the plane. Is this possible?
Thank you very much,
Afonso
Well, what do you wish to have in the plane? The circles forming a grid? If that's the case, you need to remove the radial distortion; these kinds of models are described by a few parameters and are non-linear, by the way. If you find a very good algorithm, you may obtain something like this:
If this is not your idea, you need to apply an elastic transformation. That kind of transformation uses a grid as its model, and you need to propose your own grid model. If you want to do this automatically, you'll have to resort to elastic registration algorithms, and you can use a model like this one:
Anyway, this is not a trivial task; there is a lot of research on complex transformations, of course, if you want to obtain the transformation automatically. Otherwise you can use Photoshop ;).
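As a starting point for the radial-distortion route, here is a minimal sketch of the usual polynomial model (NumPy; the coefficients k1, k2 and the center are illustrative and would normally come from a calibration step, e.g. with OpenCV):

import numpy as np

def radial_model(points, center, k1, k2):
    """Apply x' = c + (x - c) * (1 + k1*r^2 + k2*r^4) to (N, 2) pixel coordinates."""
    d = points - center                        # coordinates relative to the distortion center
    r2 = np.sum(d * d, axis=1, keepdims=True)  # squared radius of each point
    return center + d * (1.0 + k1 * r2 + k2 * r2 * r2)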
I am working on a Perspective camera. The constructor must be:
PerspectiveCamera::PerspectiveCamera(Vec3f &center, Vec3f &direction, Vec3f &up, float angle)
This constructor is different from most others, as it lacks near and far clipping planes. I know what to do with center, direction, and up -- the standard look-at algorithm.
We can construct the view matrix and translate matrix accordingly:
Thus, the viewing transformation is:
For an orthographic camera (which is working correctly for me), the inverse transformation is used to go from screen space to world space. The camera coordinates go from (-1,-1,0) --> (1,1,0) in screen space.
For perspective transformation, only the field of view is given. The Wikipedia 3D projection article gives a perspective projection matrix using the field of view angle and assuming camera coordinates go from (-1,-1) --> (1,1):
In my code, (ex, ey, ez) are the camera coordinates that go from (-1,-1, ez) --> (1,1, ez). Note that the 1 in the (3,3) spot of K isn't in the Wikipedia article -- I put it in to make the matrix invertible, so that may be a problem.
But anyways, for perspective projection, I used this transformation:
K inverse is multiplied with p to map the canonical view volume to a view frustum, and the result of that is multiplied with M inverse to move into world coordinates.
I get the wrong results. The correct output is:
My output looks like this:
Am I using the right algorithm for perspective projection given my constraint (no near and far plane inputs)?
Just in case somebody else runs into this issue: the method presented in the question is not the proper way to create a viewing frustum. The perspective matrix (K) is for projecting the far plane onto the near plane, and we don't have those planes in this case.
To create a frustum, do the inverse transformation on (x, y, ez) (as opposed to (x, y, 0) for orthographic projection). Find a new direction by subtracting the transformed point from the center of projection. Shoot the ray.
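In code, this method might look like the following (a NumPy sketch; it assumes angle is the full field-of-view angle, so the (-1,-1) --> (1,1) screen square sits at distance 1/tan(angle/2) from the center of projection):

import numpy as np

def generate_ray(center, direction, up, angle, x, y):
    """Ray through screen point (x, y) in [-1, 1]^2, as origin plus unit direction."""
    w = direction / np.linalg.norm(direction)  # camera forward
    u = np.cross(w, up)
    u = u / np.linalg.norm(u)                  # camera right
    v = np.cross(u, w)                         # true up
    ez = 1.0 / np.tan(angle / 2.0)             # distance to the screen plane
    d = x * u + y * v + ez * w                 # transformed point minus the center of projection
    return center, d / np.linalg.norm(d)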
I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and WebGL, but I'm having trouble finding any information on animating a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
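To see the arithmetic outside the shader, this is all example 1 computes per vertex (a NumPy sketch with made-up vertex data; the last line is exactly what GLSL's mix(start, end, t) does):

import numpy as np

start = np.array([[0.5, 0.8, 0.3], [-0.2, 0.4, 0.9]])  # original sphere positions (the attribute)
end = start * np.array([1.0, 0.0, 1.0])                # goal positions: (x, y, z) -> (x, 0, z)
t = 0.5                                                # animation progress in [0, 1] (the uniform)
positions = (1.0 - t) * start + t * end                # linear blend between the two shapes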
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that, after the animation, they are all sitting on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animating the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way, at the start you have the sphere shape, and at the end, the plane shape.
Hope this helps; tell me if you need more directions on how to do it.
// myGlobe is an instance of THREE.Mesh, and something_calculated would be a THREE.Vector3
// that you compute in some manner (sphere-plane interpolation over time)
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // tell three.js the vertices changed
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation; for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
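For instance, the calculation might look like this (a sketch of the math only, in Python for readability; the radius and map dimensions are made-up parameters, and in practice this runs in GLSL inside the vertex shader):

import numpy as np

def vertex_position(u, v, t, radius=1.0, width=4.0, height=2.0):
    """Blend a sphere point and a flat-map point from UV coordinates and time t in [0, 1]."""
    theta, phi = u * 2.0 * np.pi, v * np.pi          # longitude and latitude from the UVs
    sphere = radius * np.array([np.sin(phi) * np.cos(theta),
                                np.cos(phi),
                                np.sin(phi) * np.sin(theta)])
    flat = np.array([(u - 0.5) * width, (v - 0.5) * height, 0.0])
    return (1.0 - t) * sphere + t * flat             # GLSL: mix(sphere, flat, t)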
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
My problem involves matching a set of 2D points to a set of 3D points, with known correspondence between the two. Basically, I have points on an image, and I need the optimal translation and rotation to fit them to a known 3D point cloud. The Kabsch algorithm was originally meant for finding the best fit of one 3D point cloud to another, and there are implementations out there for 2D to 2D, but not something I can use. I know it's possible, but I just don't know how to go about it. I searched for code out there and came up empty. I'm programming in MATLAB at the moment, but any language would do.
Thank you.
Edit: The goal is to get a rotation and translation of the 3D point cloud that best matches the 2D points when it is projected onto the image plane.
I should also mention that the 3D-to-2D projection is done using weak perspective.
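To be concrete, this is the model I'm trying to fit (a NumPy/SciPy sketch; X is the Nx3 cloud, p is the Nx2 set of image points, and all the names are mine):

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, X, p):
    """Weak perspective: rotate, drop z, scale uniformly, translate in 2D."""
    rotvec, t, s = params[:3], params[3:5], params[5]
    proj = s * Rotation.from_rotvec(rotvec).apply(X)[:, :2] + t
    return (proj - p).ravel()

# A generic nonlinear fit, just to illustrate the objective:
# fit = least_squares(residuals, np.array([0, 0, 0, 0, 0, 1.0]), args=(X, p))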
So basically, you have a "plane" or a "line" of points, as if the third dimension were 0. You could treat them like this and use the typical Kabsch algorithm of squared-distance minimization, couldn't you?
EDIT: maybe it's nonsense, but what about projecting the 3D body to 2D coordinates and doing a 2D comparison? It's computationally expensive, since it involves exploring all the angles of the 3D object plus the projection, but it's easier to lose one dimension by applying a projection than to add a new dimension to a 2D point.
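For reference, the typical Kabsch step itself is short (a NumPy sketch that works in 2D or 3D, so the idea above of treating the 2D points as 3D just means padding them with a zero z column):

import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point rows P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP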