I'm currently trying to filter the depth information using OpenCV. For that reason I need to transform Project Tango's depth information XYZij into an image-like depth map (like the output of a Microsoft Kinect). Unfortunately the official API lacks the ij part of XYZij, so I'm trying to project the XYZ part using the camera intrinsics projection, which is explained in the official C API documentation. My current approach looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);
float k1 = static_cast<float>(ccIntrinsics.distortion[0]);
float k2 = static_cast<float>(ccIntrinsics.distortion[1]);
float k3 = static_cast<float>(ccIntrinsics.distortion[2]);
for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    float ru = sqrt((pow(X, 2) + pow(Y, 2)) / pow(Z, 2));
    float rd = ru + k1 * pow(ru, 3) + k2 * pow(ru, 5) + k3 * pow(ru, 7);

    int x = X / Z * fx * rd / ru + cx;
    int y = Y / Z * fy * rd / ru + cy;

    // drawing into OpenCV Mat in red
    depth.at<cv::Vec3b>(x, y)[0] = 240;
}
The resulting depth map can be seen in the lower right corner. But it seems that this calculation results in a linear representation ... Has anyone already done something similar? Are the XYZ points already positioned correctly for this projection?
I have actually found a solution ... I just skipped the distortion calculation, as is done in the rgb-depth-sync-example. My code now looks like this:
float fx = static_cast<float>(ccIntrinsics.fx);
float fy = static_cast<float>(ccIntrinsics.fy);
float cx = static_cast<float>(ccIntrinsics.cx);
float cy = static_cast<float>(ccIntrinsics.cy);
int width = static_cast<int>(ccIntrinsics.width);
int height = static_cast<int>(ccIntrinsics.height);
for (int k = 0; k < xyz_ij->xyz_count; ++k) {
    float X = xyz_ij->xyz[k][0];
    float Y = xyz_ij->xyz[k][1];
    float Z = xyz_ij->xyz[k][2];

    int x = static_cast<int>(fx * (X / Z) + cx);
    int y = static_cast<int>(fy * (Y / Z) + cy);

    uint8_t depth_value = UCHAR_MAX - ((Z * 1000) * UCHAR_MAX / 4500);

    cv::Point point(y % height, x % width);
    line(depth, point, point, cv::Scalar(depth_value, depth_value, depth_value), 4.5);
}
And the working OpenCV result looks like this:
I tried to visualize the formula for a helico-spiral by programming it, but I failed to get the results I wanted. I don't know if I made an error when converting polar coordinates to Cartesian coordinates.
Here is my code:
float alpha;
float beta;
float A;

for (int i = 0; i < num; i++) {
    float theta = i * 0.1 * $PI;
    float r = A * exp(1.0 / tan(alpha) * theta);
    float x = r * sin(beta) * cos(theta);
    float y = r * sin(beta) * sin(theta);
    float z = -1.0 * A * cos(beta);
    vector pos = set(x, z, 0); // point position
}
I customized alpha, beta and theta, and wanted to find the coordinates of the points on the helix through the radius r.
Start with a parametrized spiral:
H = max height
R = max radius
n = number of screws
t = <0,1> input parameter
a = 2.0*M_PI*n*t;
r = R*t;
h = H*(1.0-t);
x = r*cos(a);
y = r*sin(a);
z = h;
Now, as you want to parametrize the spiral by r, just compute t from r:
t = r/R;
so:
t = r/R;
h = H*(1.0-t);
a = 2.0*M_PI*n*t;
x = r*cos(a);
y = r*sin(a);
z = h;
So simply do a for loop where r goes from 0 to R with some small step, and render lines between the computed points ...
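For illustration, here is a minimal Java sketch of that loop; the values of H, R, n and the number of steps are made-up examples, and the actual line rendering is left as a placeholder print:

public class SpiralDemo {
    public static void main(String[] args) {
        double H = 2.0;     // max height (example value)
        double R = 1.0;     // max radius (example value)
        int n = 5;          // number of screws (example value)
        int steps = 100;    // number of segments to render

        double[] prev = null;
        for (int i = 0; i <= steps; i++) {
            double r = R * i / steps;            // r goes from 0 to R in small steps
            double t = r / R;                    // recover the parameter from r
            double a = 2.0 * Math.PI * n * t;
            double x = r * Math.cos(a);
            double y = r * Math.sin(a);
            double z = H * (1.0 - t);
            if (prev != null) {
                // draw a line from prev to (x, y, z) with your renderer here
                System.out.printf("segment (%.3f, %.3f, %.3f) -> (%.3f, %.3f, %.3f)%n",
                        prev[0], prev[1], prev[2], x, y, z);
            }
            prev = new double[]{x, y, z};
        }
    }
}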
Also, the dependence between H and R is:
tan(Beta) = R/H
---------------
R = H*tan(Beta)
H = R/tan(Beta)
Beta = atan(R/H)
I found some useful coordinate conversion code at https://gist.github.com/govert/1b373696c9a27ff4c72a
However, there is a part of the EcefToEnu function specifically that I'm not clear on:
// Converts the Earth-Centered Earth-Fixed (ECEF) coordinates (x, y, z) to
// East-North-Up coordinates in a Local Tangent Plane that is centered at the
// (WGS-84) Geodetic point (lat0, lon0, h0).
public static void EcefToEnu(double x, double y, double z,
                             double lat0, double lon0, double h0,
                             out double xEast, out double yNorth, out double zUp)
{
    // Convert to radians in notation consistent with the paper:
    var lambda = DegreesToRadians(lat0);
    var phi = DegreesToRadians(lon0);
    var s = Sin(lambda);
    var N = a / Sqrt(1 - e_sq * s * s);

    var sin_lambda = Sin(lambda);
    var cos_lambda = Cos(lambda);
    var cos_phi = Cos(phi);
    var sin_phi = Sin(phi);

    double x0 = (h0 + N) * cos_lambda * cos_phi;
    double y0 = (h0 + N) * cos_lambda * sin_phi;
    double z0 = (h0 + (1 - e_sq) * N) * sin_lambda;

    double xd, yd, zd;
    xd = x - x0;
    yd = y - y0;
    zd = z - z0;

    // This is the matrix multiplication
    xEast = -sin_phi * xd + cos_phi * yd;
    yNorth = -cos_phi * sin_lambda * xd - sin_lambda * sin_phi * yd + cos_lambda * zd;
    zUp = cos_lambda * cos_phi * xd + cos_lambda * sin_phi * yd + sin_lambda * zd;
}
I get the inputs, the first 4 conversion lines, the 4 sin and cos lines, and I get the matrix multiplication - there are numerous examples of that in the algorithms I've seen. But what I'm not clear on is this part:
double x0 = (h0 + N) * cos_lambda * cos_phi;
double y0 = (h0 + N) * cos_lambda * sin_phi;
double z0 = (h0 + (1 - e_sq) * N) * sin_lambda;
double xd, yd, zd;
xd = x - x0;
yd = y - y0;
zd = z - z0;
I don't recognize this section from any of the algorithms I've seen. It appears to be some sort of offset, but aside from that, I'm unclear where the formulas came from or what exactly this code is doing. Can someone please enlighten me as to what this bit of code is doing? I just want to understand what I'm looking at.
They are the conversion from geodetic coordinates (lat, lon, height), a.k.a. (phi, lambda, h0), to ECEF Cartesian coordinates (x0, y0, z0), and then the computation of the ECEF vector from (x0, y0, z0) to (x, y, z).
For the first part, note that if the ellipsoid were a sphere (e == 0), then it would simply be the conversion from spherical polar coordinates to Cartesian coordinates.
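To make that concrete, here is a small Java sketch of just that geodetic-to-ECEF step, using the same notation as the gist (lambda for latitude, phi for longitude). The class name and example point are invented, and the WGS-84 constants are the standard published values rather than something copied from the gist:

public class GeodeticToEcef {
    // WGS-84 constants (standard values)
    static final double A = 6378137.0;             // semi-major axis in metres
    static final double F = 1.0 / 298.257223563;   // flattening
    static final double E_SQ = F * (2.0 - F);      // first eccentricity squared

    // Converts geodetic (lat0, lon0 in degrees, h0 in metres) to ECEF (x0, y0, z0).
    static double[] geodeticToEcef(double lat0, double lon0, double h0) {
        double lambda = Math.toRadians(lat0);      // latitude
        double phi = Math.toRadians(lon0);         // longitude
        double sinLambda = Math.sin(lambda);
        double cosLambda = Math.cos(lambda);
        // N is the prime-vertical radius of curvature at this latitude
        double N = A / Math.sqrt(1.0 - E_SQ * sinLambda * sinLambda);

        double x0 = (h0 + N) * cosLambda * Math.cos(phi);
        double y0 = (h0 + N) * cosLambda * Math.sin(phi);
        double z0 = (h0 + (1.0 - E_SQ) * N) * sinLambda;
        // With E_SQ == 0, N collapses to A and these three lines are exactly the
        // spherical-polar-to-Cartesian conversion with radius A + h0.
        return new double[]{x0, y0, z0};
    }

    public static void main(String[] args) {
        double[] p = geodeticToEcef(52.0, 4.0, 0.0);   // arbitrary example point
        System.out.printf("x0=%.1f y0=%.1f z0=%.1f%n", p[0], p[1], p[2]);
    }
}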
I have a square. I know the X,Y coordinates of each of its corners (A,B,C,D), the coordinates of the second box's corners (E,F,G,H), and the position of the circle inside the first box (I,J).
So I want to find the coordinates of the same circle inside the second box, based on all the data I have.
You need to find the transform T from the first box to the second:
B = T * A
so you need to find T, which is a 3x3 matrix if this is on the plane.
Solve the equations as shown on this page: http://andrew.gibiansky.com/blog/image-processing/image-morphing/ (he has the program too). You only need three points from the first quadrangle and the corresponding three points in the second quadrangle:
private static float[] calculateTransform(Polygon pOriginal, Polygon pFinal) {
    // Three corresponding points: lowercase = final quadrangle, uppercase = original.
    float a = pFinal.xpoints[0];
    float b = pFinal.ypoints[0];
    float c = pFinal.xpoints[1];
    float d = pFinal.ypoints[1];
    float e = pFinal.xpoints[2];
    float f = pFinal.ypoints[2];

    float A = pOriginal.xpoints[0];
    float B = pOriginal.ypoints[0];
    float C = pOriginal.xpoints[1];
    float D = pOriginal.ypoints[1];
    float E = pOriginal.xpoints[2];
    float F = pOriginal.ypoints[2];

    // Solve for the affine transform: finalX = x*origX + y*origY + t,
    //                                 finalY = z*origX + w*origY + s
    float x = ((B - D) * (e - c) - (a - c) * (F - D)) / ((B - D) * (E - C) - (A - C) * (F - D));
    float y = (a * (E - C) + A * (c - e) - c * E + e * C) / (A * (D - F) + B * (E - C) + C * F - D * E);
    float t = c - x * C - y * D;

    float z = ((B - D) * (f - d) - (b - d) * (F - D)) / ((B - D) * (E - C) - (A - C) * (F - D));
    float w = (b * (E - C) + A * (d - f) - d * E + f * C) / (A * (D - F) + B * (E - C) + C * F - D * E);
    float s = d - z * C - w * D;

    float[] transform = {x, y, z, w, t, s};
    return transform;
}
Then apply T to any point in A to get the corresponding point in B:
private static float[] applyTransform(float x, float y, float[] transform) {
    float a = transform[0];
    float b = transform[1];
    float c = transform[2];
    float d = transform[3];
    float t = transform[4];
    float s = transform[5];

    float p = a * x + b * y + t;
    float q = c * x + d * y + s;

    float[] result = {p, q};
    return result;
}
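As a usage sketch (assuming it sits in the same class as the two private static methods above, with java.awt.Polygon imported; the corner coordinates and the circle centre are made-up values), mapping the circle centre (I, J) into the second box could look like this:

// Sketch: assumes this main method lives in the same class as the two methods above.
public static void main(String[] args) {
    // Three corresponding corners of each box (example values only).
    Polygon pOriginal = new Polygon(new int[]{0, 100, 100}, new int[]{0, 0, 100}, 3);
    Polygon pFinal    = new Polygon(new int[]{200, 300, 300}, new int[]{50, 50, 150}, 3);

    float[] transform = calculateTransform(pOriginal, pFinal);

    // Circle centre (I, J) in the first box, made-up coordinates.
    float iX = 40f, iY = 60f;
    float[] mapped = applyTransform(iX, iY, transform);

    // With these example boxes (a pure translation) this prints (240.0, 110.0).
    System.out.println("Circle centre in second box: (" + mapped[0] + ", " + mapped[1] + ")");
}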
Using a quaternion, if I rotate my cube along an axis by 90 degrees, I get a different front facing cube side, which appears as a straight-on square of a solid color. My cube has different colored sides, so changing the axis it is rotated along gives me these different colors as expected.
When I try to rotate by an arbitrary amount, I get quite the spectacular mess, and I don't know why since I'd expect the quaternion process to work well regardless of the angle:
I am creating a quaternion from 2 vectors using this:
template <typename T>
inline QuaternionT<T> QuaternionT<T>::CreateFromVectors(const Vector3<T>& v0, const Vector3<T>& v1)
{
    if (v0 == -v1)
        return QuaternionT<T>::CreateFromAxisAngle(vec3(1, 0, 0), Pi);

    Vector3<T> c = v0.Cross(v1);
    T d = v0.Dot(v1);
    T s = std::sqrt((1 + d) * 2);

    QuaternionT<T> q;
    q.x = c.x / s;
    q.y = c.y / s;
    q.z = c.z / s;
    q.w = s / 2.0f;
    return q;
}
I think the above method is fine since I've seen plenty of sample code correctly using it.
With the above method, I do this:
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,0,1));
It works, and it is a 90-degree rotation.
But suppose I want more like a 45-degree rotation?
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,1,1));
This gives me the mess above. I also tried normalizing quat1 which provides different though similarly distorted results.
I am using the quaternion as a Modelview rotation matrix, using this:
template <typename T>
inline Matrix3<T> QuaternionT<T>::ToMatrix() const
{
    const T s = 2;
    T xs, ys, zs;
    T wx, wy, wz;
    T xx, xy, xz;
    T yy, yz, zz;

    xs = x * s;  ys = y * s;  zs = z * s;
    wx = w * xs; wy = w * ys; wz = w * zs;
    xx = x * xs; xy = x * ys; xz = x * zs;
    yy = y * ys; yz = y * zs; zz = z * zs;

    Matrix3<T> m;
    m.x.x = 1 - (yy + zz); m.y.x = xy - wz;       m.z.x = xz + wy;
    m.x.y = xy + wz;       m.y.y = 1 - (xx + zz); m.z.y = yz - wx;
    m.x.z = xz - wy;       m.y.z = yz + wx;       m.z.z = 1 - (xx + yy);
    return m;
}
Any idea what's going on here?
What does your frustum look like? If you have a distorted "lens" such as an exceptionally wide-angle field of view, then angles that actually show the depth, such as an arbitrary rotation, might not look as you expect. (Just like how a fisheye lens on a camera makes perspective look unrealistic).
Make sure you are using a realistic frustum if you want to see realistic images.
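As a rough sketch of what a "realistic" frustum could look like in practice, here is a plain Java helper that builds the standard OpenGL-style (gluPerspective-like) projection matrix from a vertical field of view; the 45-degree FOV, aspect ratio and clip planes are just example values, not taken from the question's code:

// Builds a column-major OpenGL-style perspective matrix from a vertical FOV.
// A moderate FOV (roughly 45-60 degrees) keeps arbitrary rotations from looking distorted.
public class FrustumDemo {
    static float[] perspective(float fovYDegrees, float aspect, float near, float far) {
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovYDegrees) / 2.0));
        float[] m = new float[16];               // column-major, all other entries stay 0
        m[0]  = f / aspect;
        m[5]  = f;
        m[10] = (far + near) / (near - far);
        m[11] = -1f;
        m[14] = (2f * far * near) / (near - far);
        return m;
    }

    public static void main(String[] args) {
        float[] proj = perspective(45f, 16f / 9f, 0.1f, 100f);   // example values
        System.out.println("m[0]=" + proj[0] + " m[5]=" + proj[5]);
    }
}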
Does anyone have an algorithm for drawing an arrow in the middle of a given line? I have searched on Google but haven't found any good implementation.
P.S. I really don't mind the language, but it would be great if it was Java, since it is the language I am using for this.
Thanks in advance.
Here's a function to draw an arrow with its head at a point p. You would set this to the midpoint of your line. dx and dy are the line direction, which is given by (x1 - x0, y1 - y0). This will give an arrow that is scaled to the line length. Normalize this direction if you want the arrow to always be the same size.
private static void DrawArrow(Graphics g, Pen pen, Point p, float dx, float dy)
{
    // The two arrow-head strokes are the direction (dx, dy) rotated by
    // +30 and -30 degrees (cos 30 ~ 0.866, sin 30 = 0.5).
    const double cos = 0.866;
    const double sin = 0.500;

    PointF end1 = new PointF(
        (float)(p.X + (dx * cos + dy * -sin)),
        (float)(p.Y + (dx * sin + dy * cos)));
    PointF end2 = new PointF(
        (float)(p.X + (dx * cos + dy * sin)),
        (float)(p.Y + (dx * -sin + dy * cos)));

    g.DrawLine(pen, p, end1);
    g.DrawLine(pen, p, end2);
}
Here's a method to add an arrow head to a line.
You just have to give it the coordinates of your arrow tip and tail.
private static void drawArrow(int tipX, int tailX, int tipY, int tailY, Graphics2D g)
{
    int arrowLength = 7; // can be adjusted
    int dx = tipX - tailX;
    int dy = tipY - tailY;

    double theta = Math.atan2(dy, dx);

    double rad = Math.toRadians(35); // 35-degree angle, can be adjusted
    double x = tipX - arrowLength * Math.cos(theta + rad);
    double y = tipY - arrowLength * Math.sin(theta + rad);

    double phi2 = Math.toRadians(-35); // -35-degree angle, can be adjusted
    double x2 = tipX - arrowLength * Math.cos(theta + phi2);
    double y2 = tipY - arrowLength * Math.sin(theta + phi2);

    int[] arrowYs = new int[3];
    arrowYs[0] = tipY;
    arrowYs[1] = (int) y;
    arrowYs[2] = (int) y2;

    int[] arrowXs = new int[3];
    arrowXs[0] = tipX;
    arrowXs[1] = (int) x;
    arrowXs[2] = (int) x2;

    g.fillPolygon(arrowXs, arrowYs, 3);
}
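A usage sketch, relating this back to the original question of putting the arrow in the middle of a line: it assumes the drawArrow method above has been copied into (or is otherwise accessible to) a Swing panel class, and the line endpoints are invented example values.

import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

// Example panel: draws a line and places the arrow head at its midpoint.
class ArrowPanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;

        int x0 = 20, y0 = 20, x1 = 180, y1 = 120;   // example line endpoints
        g2.drawLine(x0, y0, x1, y1);

        int midX = (x0 + x1) / 2;
        int midY = (y0 + y1) / 2;
        // Tip at the midpoint, tail at the line start: (tipX, tailX, tipY, tailY, g)
        drawArrow(midX, x0, midY, y0, g2);
    }
}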