Find shifted coordinate in a skewed square - image

I have a square. I know the X,Y coordinates of each of its corners (A,B,C,D), the coordinates of the corners of the second, skewed box (E,F,G,H), and the position of a circle inside the first box (I,J).
So...
I want to find the coordinates of the same circle inside the second box, based on all the data I have.

You need to find the transform from the first box to the second:
B = T * A
so you need to find T, which is a 3x3 matrix if this is in the plane.
Solve the equations as shown on this page: http://andrew.gibiansky.com/blog/image-processing/image-morphing/
He has the program there too - you only need three points from the first quadrangle and the corresponding three points in the second quadrangle:
private static float[] calculateTransform(Polygon pOriginal, Polygon pFinal) {
    // Three points of the target (final) quadrangle
    float a = pFinal.xpoints[0];
    float b = pFinal.ypoints[0];
    float c = pFinal.xpoints[1];
    float d = pFinal.ypoints[1];
    float e = pFinal.xpoints[2];
    float f = pFinal.ypoints[2];
    // The corresponding three points of the original quadrangle
    float A = pOriginal.xpoints[0];
    float B = pOriginal.ypoints[0];
    float C = pOriginal.xpoints[1];
    float D = pOriginal.ypoints[1];
    float E = pOriginal.xpoints[2];
    float F = pOriginal.ypoints[2];
    // Solve the six linear equations for the affine coefficients
    float x = ((B - D) * (e - c) - (a - c) * (F - D)) / ((B - D) * (E - C) - (A - C) * (F - D));
    float y = (a * (E - C) + A * (c - e) - c * E + e * C) / (A * (D - F) + B * (E - C) + C * F - D * E);
    float t = c - x * C - y * D;
    float z = ((B - D) * (f - d) - (b - d) * (F - D)) / ((B - D) * (E - C) - (A - C) * (F - D));
    float w = (b * (E - C) + A * (d - f) - d * E + f * C) / (A * (D - F) + B * (E - C) + C * F - D * E);
    float s = d - z * C - w * D;
    float[] transform = {x, y, z, w, t, s};
    return transform;
}
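The six values returned are just the entries of the affine matrix T from above: writing them with the names used in applyTransform below, a point (x, y) is mapped to (a*x + b*y + t, c*x + d*y + s), i.e. the top two rows of the 3x3 matrix are [a b t] and [c d s] and the last row is [0 0 1].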
Then apply T to any point of the first quadrangle to get the corresponding point in the second:
private static float[] applyTransform(float x, float y, float[] transform) {
    float a = transform[0];
    float b = transform[1];
    float c = transform[2];
    float d = transform[3];
    float t = transform[4];
    float s = transform[5];
    // Affine mapping: p = a*x + b*y + t, q = c*x + d*y + s
    float p = a * x + b * y + t;
    float q = c * x + d * y + s;
    float[] result = {p, q};
    return result;
}
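So for the circle: build the transform from three corresponding corners of the two boxes (for example A→E, B→F, C→G, assuming the corners are listed in matching order), then pass the circle's position (I, J) through applyTransform to get its position inside the second box.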

Related

How to generate a helico-spiral by visualizing formula or equation

I tried to visualize the formula for a helico-spiral in code, but I failed to get the results I wanted. I don't know if I made an error when converting polar coordinates to Cartesian coordinates.
Here is my code:
float alpha;
float beta;
float A;
for (int i = 0; i < num; i++) {
    float theta = i * 0.1 * $PI;
    float r = A * exp(1.0 / tan(alpha) * theta);
    float x = r * sin(beta) * cos(theta);
    float y = r * sin(beta) * sin(theta);
    float z = -1.0 * A * cos(beta);
    vector pos = set(x, z, 0); // point position
}
I set alpha, beta and theta myself and wanted to find the coordinates of the points on the helix through the radius r.
Start with a parametrized spiral:
H = max height
R = max radius
n = number of screws
t = <0,1> input parameter
a = 2.0*M_PI*n*t;
r = R*t;
h = H*(1.0-t);
x = r*cos(a);
y = r*sin(a);
z = h;
Now, since you want to parametrize the spiral by r, just compute t from r:
t = r/R;
so:
t = r/R;
h = H*(1.0-t);
a = 2.0*M_PI*n*t;
x = r*cos(a);
y = r*sin(a);
z = h;
So simply do a for loop where r goes from 0 to R with some small step and render lines between the computed points ...
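For example, a minimal C++ sketch of that loop using the equations above. It only generates the points; Vec3 is a stand-in type and drawing the lines between consecutive points is left to whatever your environment provides:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };          // stand-in point type

// Sample the conical spiral by sweeping r from 0 to R in `steps` increments.
std::vector<Vec3> conicalSpiral(float R, float H, float n, int steps)
{
    std::vector<Vec3> pts;
    for (int i = 0; i <= steps; ++i) {
        float r = R * i / steps;         // current radius
        float t = r / R;                 // parameter recovered from r
        float a = 2.0f * float(M_PI) * n * t;
        float h = H * (1.0f - t);        // height shrinks as the radius grows
        pts.push_back({ r * std::cos(a), r * std::sin(a), h });
    }
    return pts;                          // render lines between consecutive points
}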
Also, the relation between H, R and Beta is:
tan(Beta) = R/H
---------------
R = H*tan(Beta)
H = R/tan(Beta)
Beta = atan(R/H)

Project Tango: Depthmap Transformation from XYZij data

I'm currently trying to filter the depth information using OpenCV. For that I need to transform Project Tango's depth information XYZij into an image-like depth map (like the output of the Microsoft Kinect). Unfortunately the official APIs lack the ij part of XYZij. That's why I'm trying to project the XYZ part using the camera intrinsics projection, which is explained in the official C API documentation. My current approach looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);
float k1 = static_cast<float>(ccIntrinsics.distortion[0]);
float k2 = static_cast<float>(ccIntrinsics.distortion[1]);
float k3 = static_cast<float>(ccIntrinsics.distortion[2]);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    float ru = sqrt((pow(X, 2) + pow(Y, 2)) / pow(Z, 2));
    float rd = ru + k1 * pow(ru, 3) + k2 * pow(ru, 5) + k3 * pow(ru, 7);

    int x = X / Z * fx * rd / ru + cx;
    int y = X / Z * fy * rd / ru + cy;

    // drawing into OpenCV Mat in red
    depth.at<cv::Vec3b>(x, y)[0] = 240;
}
The resulting depthmap can be seen in the lower right corner. But it seems that this calculation results in a linear representation ... Has anyone already done something similar? Are the XYZ points already positioned correctly for this projection?
I have actually found a solution ... I just skipped the distortion calculation, like they do in the rgb-depth-sync-example. My code now looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);
int width = static_cast<int>(ccIntrinsics.width);
int height = static_cast<int>(ccIntrinsics.height);

for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k * 3][0];
    float Y = xyz_ij->xyz[k * 3][1];
    float Z = xyz_ij->xyz[k * 3][2];

    // Plain pinhole projection, no distortion terms
    int x = static_cast<int>(fx * (X / Z) + cx);
    int y = static_cast<int>(fy * (Y / Z) + cy);

    // Map depth in metres (up to 4.5 m) to an 8-bit grey value, near = bright
    uint8_t depth_value = UCHAR_MAX - ((Z * 1000) * UCHAR_MAX / 4500);

    cv::Point point(y % height, x % width);
    line(depth, point, point, cv::Scalar(depth_value, depth_value, depth_value), 4.5);
}
And the working OpenCV result looks like this:

denormalizing depth values

I currently use the following code in my fragment shader to display depth images. The values I get from this are normalized; I read them back using readPixels. But I actually need the original values, without the normalization. I could take the vertex positions I have and manually multiply by the MVMatrix, but is there a simpler way to extract them?
if (vIsDepth > 0.5)
{
    float z = position_1.z;
    float n = 1.0;
    float f = 20.0;
    float ndcDepth = (2.0 * z - n - f) / (f - n);
    float clipDepth = ndcDepth / position_1.w;
    float cr = (clipDepth * 0.5) + 0.5;
    gl_FragColor = vec4(cr, cr, cr, 1.0);
}
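If the values read back with readPixels are exactly the cr written above, the normalization can be inverted algebraically, provided position_1.w is known for the fragment (for example recomputed from your own vertex transform). A minimal C++ sketch under that assumption; denormalizeDepth is just an illustrative helper name:
// Invert the shader's mapping: cr = ((2*z - n - f)/(f - n) / w) * 0.5 + 0.5
float denormalizeDepth(float cr, float w, float n = 1.0f, float f = 20.0f)
{
    float clipDepth = cr * 2.0f - 1.0f;           // undo * 0.5 + 0.5
    float ndcDepth  = clipDepth * w;              // undo the division by w
    return (ndcDepth * (f - n) + n + f) * 0.5f;   // undo (2.0*z - n - f)/(f - n)
}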

Check if point is inside a rotated rectangle (with different rectangle origins)

How can I check if a point is inside a rotated rectangle when the rectangle can have different origins? This is basically what I'm using now:
struct Point
{
    float x;
    float y;
};

struct Rectangle
{
    float x;
    float y;
    float w;
    float h;
    float origin;
    float rotation; // In degrees
};

bool contains(const Rectangle& rect, const Point& point)
{
    float c = std::cos(toRadians(-rect.rotation));
    float s = std::sin(toRadians(-rect.rotation));

    float x = rect.x;
    float y = rect.y;
    float w = rect.w;
    float h = rect.h;

    // Rotate the point back around (x, y)
    float rotX = x + c * (point.x - x) - s * (point.y - y);
    float rotY = y + s * (point.x - x) + c * (point.y - y);

    // Axis-aligned bounds of the unrotated rectangle, centred on (x, y)
    float lx = x - w / 2.f;
    float rx = x + w / 2.f;
    float ty = y - h / 2.f;
    float by = y + h / 2.f;

    return lx <= rotX && rotX <= rx && ty <= rotY && rotY <= by;
}
This code works when the origin is at the center of the rectangle, but not with any other origin that I have tested. How can I make it also work when the origin is, for example, at the top-left corner of the rectangle?
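One way to generalize the test above is to keep rotating the point back around the pivot (rect.x, rect.y), but derive the unrotated bounds from the origin offset instead of assuming the pivot is the centre. A sketch, assuming the origin is stored as an (ox, oy) offset from the rectangle's top-left corner to the pivot; the single float origin field above would need to be adapted to that:
bool containsWithOrigin(const Rectangle& rect, const Point& point,
                        float ox, float oy)
{
    float c = std::cos(toRadians(-rect.rotation));
    float s = std::sin(toRadians(-rect.rotation));

    // Rotate the point back around the pivot (rect.x, rect.y)
    float rotX = rect.x + c * (point.x - rect.x) - s * (point.y - rect.y);
    float rotY = rect.y + s * (point.x - rect.x) + c * (point.y - rect.y);

    // Unrotated rectangle: its top-left corner sits at (rect.x - ox, rect.y - oy)
    float lx = rect.x - ox;
    float ty = rect.y - oy;
    return lx <= rotX && rotX <= lx + rect.w &&
           ty <= rotY && rotY <= ty + rect.h;
}
With ox = rect.w / 2 and oy = rect.h / 2 this reduces to the original centre-origin version.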

Why is my cube distorted?

Using a quaternion, if I rotate my cube about an axis by 90 degrees, I get a different front-facing cube side, which appears as a straight-on square of a solid color. My cube has different colored sides, so changing the axis it is rotated about gives me these different colors, as expected.
When I try to rotate by an arbitrary amount, I get quite the spectacular mess, and I don't know why, since I'd expect the quaternion process to work well regardless of the angle:
I am creating a quaternion from 2 vectors using this:
template <typename T>
inline QuaternionT<T> QuaternionT<T>::CreateFromVectors(const Vector3<T>& v0, const Vector3<T>& v1)
{
    // Opposite vectors: fall back to a 180-degree rotation about the x-axis
    if (v0 == -v1)
        return QuaternionT<T>::CreateFromAxisAngle(vec3(1, 0, 0), Pi);

    Vector3<T> c = v0.Cross(v1);
    T d = v0.Dot(v1);
    T s = std::sqrt((1 + d) * 2);

    QuaternionT<T> q;
    q.x = c.x / s;
    q.y = c.y / s;
    q.z = c.z / s;
    q.w = s / 2.0f;
    return q;
}
I think the above method is fine since I've seen plenty of sample code correctly using it.
With the above method, I do this:
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,0,1));
It works, and it is a 90-degree rotation.
But suppose I want more like a 45-degree rotation?
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,1,1));
This gives me the mess above. I also tried normalizing quat1, which gives different, though similarly distorted, results.
I am using the quaternion as a Modelview rotation matrix, using this:
template <typename T>
inline Matrix3<T> QuaternionT<T>::ToMatrix() const
{
    // Standard quaternion-to-matrix conversion; it assumes a unit quaternion
    const T s = 2;
    T xs, ys, zs;
    T wx, wy, wz;
    T xx, xy, xz;
    T yy, yz, zz;

    xs = x * s;  ys = y * s;  zs = z * s;
    wx = w * xs; wy = w * ys; wz = w * zs;
    xx = x * xs; xy = x * ys; xz = x * zs;
    yy = y * ys; yz = y * zs; zz = z * zs;

    Matrix3<T> m;
    m.x.x = 1 - (yy + zz); m.y.x = xy - wz;       m.z.x = xz + wy;
    m.x.y = xy + wz;       m.y.y = 1 - (xx + zz); m.z.y = yz - wx;
    m.x.z = xz - wy;       m.y.z = yz + wx;       m.z.z = 1 - (xx + yy);
    return m;
}
Any idea what's going on here?
What does your frustum look like? If you have a distorted "lens" such as an exceptionally wide-angle field of view, then angles that actually show the depth, such as an arbitrary rotation, might not look as you expect. (Just like how a fisheye lens on a camera makes perspective look unrealistic).
Make sure you are using a realistic frustum if you want to see realistic images.
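As a rough illustration, here is a small C++ sketch of deriving frustum bounds from an explicit vertical field of view, so the "lens" stays realistic. The 45-degree FOV and the near/far planes are just example values:
#include <cmath>

struct FrustumBounds { float left, right, bottom, top, zNear, zFar; };

// Half-height of the near plane follows from the vertical FOV:
// top = zNear * tan(fovY / 2); the aspect ratio gives the half-width.
FrustumBounds makeFrustum(float aspect, float fovYDegrees = 45.0f,
                          float zNear = 1.0f, float zFar = 100.0f)
{
    float top   = zNear * std::tan(fovYDegrees * 0.5f * float(M_PI) / 180.0f);
    float right = top * aspect;
    // Feed these into whatever projection setup you use (glFrustum,
    // a hand-rolled matrix, etc.).
    return { -right, right, -top, top, zNear, zFar };
}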
