Image transformation in OpenCV does not match point transformation - algorithm

I am distorting an image using ThinPlateSplineShapeTransformer from OpenCV 3.4.2 in C++. Separately, using the same transformer object, I want to transform 3 points from the initial image into the destination image. To visualize the transformation I use 3 triangles:
green: the triangle drawn on the original image; this one is distorted together with the image.
blue: the reference triangle (initial coordinates)
red: the triangle obtained after transforming the points
Original image:
Original image augmented with a triangle for reference:
Distorted image + separate point transformation; the red triangle should overlap the green one. Blue is the initial one.
Code:
// requires the OpenCV headers (e.g. #include <opencv2/opencv.hpp>) and using namespace cv;
void transform()
{
    Mat img = imread("test.jpg"); // the posted original image
    auto tps = cv::createThinPlateSplineShapeTransformer();

    std::vector<cv::Point2f> sourcePoints, targetPoints, myPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    targetPoints.push_back(cv::Point2f(100, 0));
    sourcePoints.push_back(cv::Point2f(650, 40));
    targetPoints.push_back(cv::Point2f(500, 0));
    sourcePoints.push_back(cv::Point2f(0, 599));
    targetPoints.push_back(cv::Point2f(0, 450));
    sourcePoints.push_back(cv::Point2f(799, 599));
    targetPoints.push_back(cv::Point2f(600, 599));

    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));
    tps->estimateTransformation(sourcePoints, targetPoints, matches);

    std::vector<cv::Point2f> transPoints, transPoints2;

    //======== draw test points
    myPoints.push_back(Point2f(100, 100));
    myPoints.push_back(Point2f(200, 200));
    myPoints.push_back(Point2f(100, 400));
    line(img, myPoints[0], myPoints[1], Scalar(0, 255, 0), 3);
    line(img, myPoints[1], myPoints[2], Scalar(0, 255, 0), 3);
    line(img, myPoints[2], myPoints[0], Scalar(0, 255, 0), 3);

    //========= warp image
    Mat img2 = img.clone();
    tps->warpImage(img, img2);

    //========= warp points
    tps->applyTransformation(myPoints, transPoints);
    //tps->applyTransformation(transPoints2, transPoints);
    line(img2, transPoints[0], transPoints[1], Scalar(0, 0, 255), 3);
    line(img2, transPoints[1], transPoints[2], Scalar(0, 0, 255), 3);
    line(img2, transPoints[2], transPoints[0], Scalar(0, 0, 255), 3);

    //========== draw reference points
    line(img2, myPoints[0], myPoints[1], Scalar(255, 0, 0), 3);
    line(img2, myPoints[1], myPoints[2], Scalar(255, 0, 0), 3);
    line(img2, myPoints[2], myPoints[0], Scalar(255, 0, 0), 3);

    imshow("img", img);
    imshow("img2", img2);
    get_test_contur();
    waitKey(0);
}
I don't understand why the result (red triangle) doesn't overlap the green triangle. What am I missing?
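A hedged guess at what is going on (an assumption, not something confirmed in the question): warpImage() treats the estimated transformation as a backward map (for every destination pixel it looks up the corresponding source pixel), while applyTransformation() maps points in the direction the transformation was estimated, so the two calls effectively apply inverse transformations of each other. A minimal sketch of the usual workaround, estimating a second transformer with the point sets swapped and using it only for the points:

auto tpsImage  = cv::createThinPlateSplineShapeTransformer();
auto tpsPoints = cv::createThinPlateSplineShapeTransformer();

// same point sets and matches as in the code above
tpsImage->estimateTransformation(sourcePoints, targetPoints, matches);
tpsPoints->estimateTransformation(targetPoints, sourcePoints, matches); // swapped

tpsImage->warpImage(img, img2);                        // warp the image as before
tpsPoints->applyTransformation(myPoints, transPoints); // points now follow the image warp

If the triangles still do not line up, the swap may need to go the other way around (i.e. swap the point sets for the image transformer instead); the underlying mismatch between the two calls is the same.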

Related

DirectX 9 - drawing a 2D sprite in its exact dimensions

I'm trying to build a simple 2D game using DirectX9, and I want to be able to use sprite dimensions and coordinates with no scaling applied.
The book that I'm following ("Introduction to 3D Game Programming with DirectX 9.0c" by Frank Luna) shows a trick using Direct3D's sprite functions to render graphics in 2D, but the book code still sets up a camera using D3DXMatrixLookAtLH and D3DXMatrixPerspectiveFovLH, and the sprite images get scaled in perspective. How do I set up the view and projection to where sprites are rendered in original dimensions and X-Y coordinates can be addressed as an actual pixel location within the window?
UPDATE
Although this might not be the ideal solution, I did come up with a workaround. I realized that if I set up the projection matrix with a 90-degree field of view and the near plane at z=0, then all I have to do is look at the origin (0, 0, 0) with D3DXMatrixLookAtLH and step back by half of the screen width (the height of an isosceles right triangle is half of its base).
So for my client area being 400 x 400, the following settings worked for me:
// get client rect
RECT R;
GetClientRect(hWnd, &R);
float width = (float)R.right;
float height = (float)R.bottom;
// step back by 400/2=200 and look at the origin
D3DXMATRIX V;
D3DXVECTOR3 pos(0.0f, 0.0f, (-width*0.5f) / (width/height)); // see "UPDATE 2" below
D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
D3DXVECTOR3 target(0.0f, 0.0f, 0.0f);
D3DXMatrixLookAtLH(&V, &pos, &target, &up);
d3dDevice->SetTransform(D3DTS_VIEW, &V);
// PI x 0.5 -> 90 degrees, set the near plane to z=0
D3DXMATRIX P;
D3DXMatrixPerspectiveFovLH(&P, D3DX_PI * 0.5f, width/height, 0.0f, 5000.0f);
d3dDevice->SetTransform(D3DTS_PROJECTION, &P);
Turning off all the texturing filters (or setting to D3DTEXF_POINT) seems to get the best pixel-accurate feel.
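For reference, a minimal sketch of that point-filtering setup (Direct3D 9, texture stage 0 only; whether you also disable mipmapping is up to you):

d3dDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);
d3dDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
d3dDevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);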
Another important thing to note was that CreateWindowEx() with requested 400 x 400 size returned a client area of something like 387 x 362, so I had to check with GetClientRect(), calculate the difference and readjust the window size using SetWindowPos() after initial creation.
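A sketch of that client-area fix-up (variable names are illustrative, not from the original code): measure the actual client area, compute the shortfall against the requested 400 x 400, and grow the outer window by the difference:

RECT rcClient, rcWindow;
GetClientRect(hWnd, &rcClient);
GetWindowRect(hWnd, &rcWindow);
int dx = 400 - (rcClient.right - rcClient.left);   // missing client width
int dy = 400 - (rcClient.bottom - rcClient.top);   // missing client height
SetWindowPos(hWnd, NULL, 0, 0,
             (rcWindow.right - rcWindow.left) + dx,
             (rcWindow.bottom - rcWindow.top) + dy,
             SWP_NOMOVE | SWP_NOZORDER);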
The screenshot below shows the result of taking the steps mentioned above. The original bitmap (right) is rendered with no scaling/stretching applied in the app (left)... finally!
UPDATE 2
I didn't test the above method for cases where the aspect ratio isn't 1:1, so I adjusted the code - the amount you step back for the camera position should be ... window_width * 0.5 / aspect_ratio (where aspect_ratio is width/height).
The DirectX Tool Kit SpriteBatch class is designed to do exactly what you describe. When drawing with Direct3D, clip-space coordinates run from (-1, -1) to (1, 1), with (-1, -1) in the lower-left corner.
This sets up the matrix that lets you specify positions in screen coordinates with (0, 0) in the upper-left corner.
// Compute the matrix.
float xScale = (mViewPort.Width > 0) ? 2.0f / mViewPort.Width : 0.0f;
float yScale = (mViewPort.Height > 0) ? 2.0f / mViewPort.Height : 0.0f;

switch (rotation)
{
    case DXGI_MODE_ROTATION_ROTATE90:
        return XMMATRIX
        (
            0, -yScale, 0, 0,
            -xScale, 0, 0, 0,
            0, 0, 1, 0,
            1, 1, 0, 1
        );

    case DXGI_MODE_ROTATION_ROTATE270:
        return XMMATRIX
        (
            0, yScale, 0, 0,
            xScale, 0, 0, 0,
            0, 0, 1, 0,
            -1, -1, 0, 1
        );

    case DXGI_MODE_ROTATION_ROTATE180:
        return XMMATRIX
        (
            -xScale, 0, 0, 0,
            0, yScale, 0, 0,
            0, 0, 1, 0,
            1, -1, 0, 1
        );

    default:
        return XMMATRIX
        (
            xScale, 0, 0, 0,
            0, -yScale, 0, 0,
            0, 0, 1, 0,
            -1, 1, 0, 1
        );
}
In Direct3D 9 the pixel centers were defined a little differently than Direct3D 10/11/12 so the typical solution in the legacy API was to add a 0.5,0.5 half-center offset to all the positions. You don't need to do this with Direct3D 10/11/12.
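A sketch of what that legacy offset looks like in practice with ID3DXSprite (the texture, position and values here are illustrative assumptions, not from the answer above; the sign of the offset depends on your exact setup):

// intended pixel position of the sprite's top-left corner
D3DXVECTOR3 pos(100.0f, 50.0f, 0.0f);
pos.x -= 0.5f; // half-pixel offset, Direct3D 9 only;
pos.y -= 0.5f; // not needed on Direct3D 10/11/12
sprite->Begin(D3DXSPRITE_ALPHABLEND);
sprite->Draw(texture, NULL, NULL, &pos, D3DCOLOR_XRGB(255, 255, 255));
sprite->End();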

Rotating a Square with a circle inside, having dots on top of the circle

OK, here is the scenario: in this picture there is a box, and inside that box there is a circle. As you can see, there are four points on top of the circle and four corners of the box. These dots are actually ellipses. By means of the dots we can reshape the images. What I want to do now is add rotation around the center, i.e. rotation for both the circle and the box. The problem with the rotation is that the dots are on top of the circle, and while rotating, their positions need to be maintained, as do the corner points. Any input on how this can be done?
You can use rotate() to apply a transformation to the coordinate system, like this:
void setup() {
    size(300, 300);
    rectMode(CENTER);
    ellipseMode(CENTER);
}

void draw() {
    background(255);

    // use frameCount to drive the rotation
    float a = radians(frameCount % 360);

    // move the origin so you can draw at (0, 0);
    // rotate() always uses the origin as its axis
    translate(width/2, height/2);

    // clockwise
    rotate(a);
    // counter-clockwise
    // rotate(-a);

    noFill();
    rect(0, 0, 100, 100);
    ellipse(0, 0, 100, 100);
    ellipse(-50, 0, 4, 4);
    ellipse(0, -50, 4, 4);
    ellipse(-50, -50, 4, 4);
    ellipse(0, 50, 4, 4);
    ellipse(50, 0, 4, 4);
    ellipse(-50, 50, 4, 4);
    ellipse(50, 50, 4, 4);

    fill(255, 0, 0);
    ellipse(50, -50, 4, 4);
}
There is this amazing tutorial on 2D transformations:
http://processing.org/tutorials/transform2d/
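If you also need the numeric positions of the dots after the rotation (rather than just drawing them rotated), the same transformation can be applied to the coordinates directly. A minimal sketch in C++ (a hypothetical helper, not part of the answer above), rotating a point about an arbitrary center:

#include <cmath>

struct Point2 { float x, y; };

// Rotate p by 'angle' radians around center c. In a y-up coordinate system this
// is counter-clockwise; in a y-down screen space the same formula turns clockwise.
Point2 rotateAround(Point2 p, Point2 c, float angle)
{
    float s = std::sin(angle), co = std::cos(angle);
    float dx = p.x - c.x, dy = p.y - c.y;
    return { c.x + dx * co - dy * s,
             c.y + dx * s + dy * co };
}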

Transform matrices with Matrix.CreatePerspectiveOffCenter in XNA: vanishing point to center

I'm trying to get the following perspective of view:
In essence I'm doing a 2D game with some 3D graphics, so I switched from Matrix.CreateOrthographicOffCenter to Matrix.CreatePerspectiveOffCenter.
I have drawn a primitive, and by decreasing its z coordinate it moves further away, but it always vanishes toward (0, 0) (the top-left), while the vanishing point should be the center.
My transform settings now look like this ((640, 360) is the center of the screen):
basicEffect.Projection = Matrix.CreatePerspectiveOffCenter(0, graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height, 0, 1, 10);
basicEffect.View = Matrix.Identity * Matrix.CreateLookAt(new Vector3(640, 360, 1), new Vector3(640, 360, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);
I can't get the vanishing point to the center of the screen. I managed to (sort of) do it with CreatePerspective view but I want to keep using CreatePerspectiveOffCenter because I can translate normal pixel positions easily to the 3D space. What am I missing?
In the end I used the following. If you're looking for a way to create a 3D view with a '2D feel', this might come in handy. With these settings, a z coordinate of 0 exactly matches the screen's width and height, and the vanishing point is at the center of the screen.
basicEffect.Projection = Matrix.CreatePerspectiveFieldOfView((float)Math.PI / 2f, 1, 1f / 1000, 1000f);
basicEffect.View = Matrix.CreateLookAt(new Vector3(0, 0, 1f), new Vector3(0, 0, 0), new Vector3(0, 1, 0));
basicEffect.World = Matrix.CreateTranslation(0, 0, 0);
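A back-of-the-envelope check of why the 90-degree field of view is convenient here (my own note, not part of the answer): with a vertical FOV of 90 degrees the visible half-extent at distance d from the camera is d * tan(45°) = d, so the plane z = 0, sitting at distance 1 from the camera at z = 1, spans exactly [-1, 1] in both axes when the aspect ratio is 1.

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979;
    const double fov = pi / 2.0;   // 90-degree vertical field of view
    const double distance = 1.0;   // camera at z = 1, plane of interest at z = 0
    double halfExtent = distance * std::tan(fov / 2.0);
    std::printf("visible extent at z = 0: %.2f x %.2f\n", 2 * halfExtent, 2 * halfExtent);
    return 0;
}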

Is there a way to transform the rectagle drawn by glDrawTex?

As the OpenGL spec states, all transformations are ignored by glDrawTex by design. But is there an easy way to draw a texture the way glDrawTex does, while transforming the pixels with a matrix first?
Can't you simply modify the x/y/z arguments of the glDrawTex function to move your texture to the position you want?
But if you want to rotate the texture, then simply draw a textured quad using two triangles. It's very simple, assuming you have OpenGL ES version 1.1:
const float v[] = {
    // x, y, u, v (4 floats per vertex)
    0,   0,   0, 0,
    0,   128, 0, 1,
    128, 0,   1, 0,
    128, 128, 1, 1,
};
glBindTexture(GL_TEXTURE_2D, texId);
glEnable(GL_TEXTURE_2D);
glTexCoordPointer(2, GL_FLOAT, 4*4, &v[2]);
glVertexPointer(2, GL_FLOAT, 4*4, &v[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
(I'm assuming you are drawing with an orthographic projection, and that 128 is the size of the texture.)
This way the texture's position can be modified using the modelview matrix. The texture matrix can also be used to modify how the texture is applied to the triangles.
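For example, to rotate the quad above about its own center, a transform can be pushed onto the modelview matrix before the draw call. A minimal sketch under the same OpenGL ES 1.1 fixed-function assumptions (64 is half of the 128-pixel quad above; the 45-degree angle is just an example):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(64.0f, 64.0f, 0.0f);    // move the pivot to the quad's center
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  // rotate around the z axis
glTranslatef(-64.0f, -64.0f, 0.0f);  // move the pivot back
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glPopMatrix();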

How to draw circle in opengles

Here is the part of my code that should show a circle on screen, but unfortunately the circle does not appear.
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
glColor3f(0.0f,1.0f,0.0f);
glBegin(GL_LINE_LOOP);
const float DEG2RAD = 3.14159 / 180;
for (int i = 0; i < 360; i++)
{
    float degInRad = i * DEG2RAD;
    glVertex2f(cos(degInRad) * 8, sin(degInRad) * 8);
}
glEnd();
glFlush();
I don't understand it - the code seems to look OK, but the circle does not appear on screen.
Your circle is too big. With the default (identity) projection, only coordinates in the range [(-1, -1), (1, 1)] are visible.
BTW, you don't need 360 segments; about 30 is usually adequate, depending on how smooth you want it.
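A minimal sketch of the two obvious fixes (the values are illustrative, not from the answer above). Note that the posted code uses immediate mode, i.e. desktop OpenGL; on OpenGL ES 1.x you would use glOrthof instead of glOrtho.

// Option 1: keep the default projection and use a radius that fits in [-1, 1]
glVertex2f(cos(degInRad) * 0.8f, sin(degInRad) * 0.8f);

// Option 2: keep radius 8 and enlarge the visible region instead
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-10, 10, -10, 10, -1, 1); // any extent >= 8 in each direction works
glMatrixMode(GL_MODELVIEW);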
