Converting a vector field from image coordinates (x,y,z grid) to per-voxel units - matlab-guide

I have a 501x501x83 single matrix which contains the vector field at each point. This vector field is given on an x,y,z grid in image (physical) coordinates. I want to convert it back into a vector field specified per voxel, rather than in physical coordinates: the values of the DVF should be in units of voxels, with the positive direction along increasing voxel indices.
How can I do it?
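A minimal MATLAB sketch of the usual conversion (not from the original thread), assuming the DVF components are stored as a 501x501x83x3 single array dvf in physical units such as mm, and that the voxel spacing is known from the image header; the variable names and spacing values are hypothetical:

% Hypothetical voxel size in mm, [sx sy sz], e.g. from the image header
spacing = [0.97 0.97 2.5];
dvfVox = zeros(size(dvf), 'single');
for k = 1:3
    % divide each displacement component by the voxel size along its axis
    dvfVox(:,:,:,k) = dvf(:,:,:,k) / spacing(k);
end
% If an image axis points opposite to the direction of increasing voxel
% index (a flipped axis in the header), negate that component as well.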

Related

Transforming a 3D plane onto a 2D coordinate system

Say I have a set of points from a sensor which are all within a margin of error on a 2D plane somewhere in 3D space. How would I go about transforming the coordinates of the points onto a 2D coordinate system, so that, for example, the convex hulls of the points or the distances between the points don't change?
Assuming you know the equation of the plane (otherwise you can fit it by least squares or another method), construct a new coordinate frame as follows:
get the normal vector;
form its cross product with an arbitrary vector having a different direction;
form the cross product of the normal and that second vector;
normalize all three and name the new axes z, x, y.
This creates an orthonormal basis to which you transform the points. This is a rigid transform, so it preserves all distances. You can then drop the z coordinate to get the orthogonal projections of the points onto the plane.
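A minimal MATLAB sketch of the above, assuming the plane is given by a point p0 (3x1) and a normal n (3x1), and pts is an N x 3 matrix of measured points (all names are placeholders):

n = n / norm(n);
a = [1; 0; 0];                      % arbitrary vector with a different direction
if abs(dot(a, n)) > 0.9             % pick another one if nearly parallel to n
    a = [0; 1; 0];
end
x = cross(n, a);  x = x / norm(x);  % first in-plane axis
y = cross(n, x);                    % second in-plane axis (unit by construction)
R = [x y n];                        % columns form the orthonormal basis
local = (pts - p0.') * R;           % rigid transform into the new frame
uv = local(:, 1:2);                 % drop z: 2D in-plane coordinates

Because the change of basis is rigid, convex hulls and pairwise distances of the points are preserved (up to the small out-of-plane error that dropping z removes).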

Plane division, binding data to each segment, creating bumps

Is it possible to divide a plane into several segments such that each segment represents a data value with a scalar value? Based on those values, I can then create bumps.
Thanks!

How to estimate affine transformation matrix of a rotated image?

I need to find out the rotation angle and translation value using the given images (original and transformed).
[Figures: the original image, the transformed image, and the detected centroids for each image.]
After finding the centroids, I tried to determine the rotation angle using the centroid points' coordinates. For example, the chimney's coordinates are (256,84) in the original image and (284,81) in the transformed image.
I have made a calculation like
angle = atan(abs(284-256)/abs(81-84))/pi*180;
But I have found different angles for different points.
I want to know how to determine the rotation angle and translation value of the transformed image.
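One standard approach (a sketch, not from the thread): fit the rotation and translation to all matched centroid pairs at once with a least-squares Procrustes/Kabsch fit, rather than computing an angle from a single pair. Here P and Q are N x 2 matrices of corresponding centroids; the values below are hypothetical:

P = [256 84; 150 200; 320 310];               % original centroids (hypothetical)
Q = [284 81; 162 195; 340 298];               % transformed centroids (hypothetical)
pc = mean(P, 1);  qc = mean(Q, 1);
H = (P - pc).' * (Q - qc);                    % 2x2 cross-covariance
[U, ~, V] = svd(H);
R = V * diag([1, sign(det(V * U.'))]) * U.';  % nearest proper rotation (det = +1)
angleDeg = atan2d(R(2,1), R(1,1));            % rotation angle in degrees
t = qc.' - R * pc.';                          % translation
% A point p (1x2) then maps to q = (R * p.' + t).'

Averaging over all correspondences is what removes the per-point disagreement; a single pair cannot separate a rotation about the image origin from a translation.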

Averaging shapes (boundary points) of arbitrary objects

I have a few images (contours) of an object. I would like to average these shapes and use the averaged shape of the object for further shape analysis.
Example:
In the above image, I have stacked the contours to illustrate my example.
I have implemented the first two steps of the algorithm below:
1) Find the centroids of both object shapes
2) Align the centers
3) Interpolate the object shape
Since I am not representing the shapes by a parametric/analytic equation, how can I get the interpolated object shape (i.e. the third step)?
Thanks in advance
If you do not have a parametric form for your shape, you can:
For each shape, create a signed distance field that is positive inside the boundary and negative outside (or vice-versa). This can be based on (e.g.) a distance transform and is evaluated at every pixel.
Compute the average of the signed distance fields
Compute the interpolated shape from the zero-crossing of the averaged field
I think this paper describes a similar method (though probably more sophisticated): "Shape-based interpolation using a chamfer distance" (http://rd.springer.com/chapter/10.1007/BFb0033762), but I don't have journal access at my current location to check.
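A minimal MATLAB sketch of the signed-distance approach, assuming the shapes are same-size binary masks bw1 and bw2 (true inside the boundary), already centroid-aligned, and using bwdist/bwperim from the Image Processing Toolbox:

sdf1 = bwdist(~bw1) - bwdist(bw1);   % signed distance field, positive inside
sdf2 = bwdist(~bw2) - bwdist(bw2);
sdfAvg = (sdf1 + sdf2) / 2;          % average the two fields
avgShape = sdfAvg > 0;               % interior of the zero level set
avgBoundary = bwperim(avgShape);     % boundary points of the averaged shape

Intermediate shapes between the two inputs can be obtained the same way with a weighted average, e.g. w*sdf1 + (1-w)*sdf2.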

Order / Sequence of matrix transformations in 2D

I have an image containing:
a set of coordinates that act as orientation markers.
a set of coordinates containing data.
Let's call this image A.
This image is fed to a scanner that returns a copy of the image with certain transformations applied (rotation, scale, translation). Let's call the transformed image B. The transformation values applied are not known to me, of course.
Once I receive the transformed image (B), I can easily track the coordinates of the orientation markers and calculate the angle of rotation, scale (x,y) and translation (x,y).
Now I need to retrieve the data coordinates since I already know the transformed orientation coordinates.
If a data point was at location (10, 10) in image A, where would it be in image B? Given that all three transformations are known.
When I apply a simple matrix transformation, the transformed data points I calculate are inaccurate. I have tried changing the order of transformations, but that seems to have no effect.
What am I doing wrong? Is it the order/sequence of transformations or something else that I'm missing?
EDIT
Please refer to this question for context.
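For reference, a sketch of how the order is usually made explicit with homogeneous 3x3 matrices (the rightmost matrix acts first). The parameter values below are hypothetical, and the assumed order is scale, then rotate, then translate, all about the image origin:

theta = deg2rad(7);  sx = 1.02;  sy = 1.02;  tx = 12;  ty = -5;  % hypothetical
S = [sx 0 0; 0 sy 0; 0 0 1];
R = [cos(theta) -sin(theta) 0; sin(theta) cos(theta) 0; 0 0 1];
T = [1 0 tx; 0 1 ty; 0 0 1];
M = T * R * S;        % scale first, then rotate, then translate
p = [10; 10; 1];      % data point (10, 10) in image A, homogeneous
q = M * p;            % its predicted location in image B
% If the scanner rotates about the image centre rather than the origin,
% conjugate the rotation: M = T * (Tc * R / Tc) * S, where Tc is the
% translation taking the origin to the image centre.

A common source of the inaccuracy described above is exactly this last point: a rotation measured about the image centre but applied about the origin shows up as an extra, angle- and scale-dependent translation.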
