Why does ArcTo sometimes not update the current position - winapi

Background
I'm working on a legacy MFC application which uses GDI to draw its content.
I need to draw rounded rectangles where each corner has a (potentially) different radius.
This means that I can no longer use RoundRect and have to roll my own using ArcTo.
I'm using SetWindowExtEx, SetWindowOrgEx, SetViewportExtEx and SetViewportOrgEx to implement zooming.
This works fine in most situations.
Problem
At certain zoom levels, my code fails to construct a proper path for the outline of the roundrect.
The following screenshots are of my RoundRect code used to create a path, which is then used to clip a bigger rectangle (to get an idea of its shape).
The clipping region created by this path is sometimes missing a corner, clips everything (two missing corners?), or clips nothing.
My guess is that due to rounding errors, the arcs become too small and are skipped altogether by GDI.
I find this hard to believe, though, since it works correctly at smaller zoom factors than the ones pictured here.
Working correctly:
Missing a corner:
The Code
I have tried to reduce the code needed to reproduce it and have ended up with the following. Note that the number in the screenshots is the value of zoomFactor, the only variable.
You should be able to paste this code into the OnPaint function of a newly created Win32 application project and manually declare zoomFactor as a constant.
SetMapMode(hdc, MM_ISOTROPIC);
SetWindowOrgEx(hdc, 0, 40, nullptr);
SetWindowExtEx(hdc, 8000, 6000, nullptr);
SetViewportOrgEx(hdc, 16, 56, nullptr);
SetViewportExtEx(hdc, 16 + (396)*zoomFactor/1000,
                 48 + (279)*zoomFactor/1000, nullptr);
BeginPath(hdc);
MoveToEx(hdc, 70, 1250, nullptr);
// ArcTo(hdc, left, top, right, bottom, xRadial1, yRadial1, xRadial2, yRadial2)
ArcTo(hdc,
      50, 1250, 90, 1290,
      70, 1250,
      50, 1270);
ArcTo(hdc,
      50, 2311, 90, 2351,
      50, 2331,
      70, 2351);
ArcTo(hdc,
      1068, 2311, 1108, 2351,
      1088, 2351,
      1108, 2331);
ArcTo(hdc,
      1068, 1250, 1108, 1290,
      1108, 1270,
      1088, 1250);
CloseFigure(hdc);
EndPath(hdc);
SelectClipPath(hdc, RGN_AND);
HBRUSH br = CreateSolidBrush(RGB(255,0,255));
const RECT r = {0, 0, 8000, 6000};
FillRect(hdc, &r, br);
DeleteObject(br); // don't leak the brush

Here is a simpler bit of code to illustrate the problem:
const int r = 20;
MoveToEx(hdc, 200, 100, nullptr);
BOOL b = ArcTo(hdc,
               100 + 2 * r, 100,  // bounding rectangle, first corner
               100, 100 + 2 * r,  // bounding rectangle, opposite corner
               100 + r, 100,      // radial line defining the arc's start
               100, 100 + r);     // radial line defining the arc's end
POINT p;
GetCurrentPositionEx(hdc, &p);
This draws a single corner of radius r. It works fine for non-zero values of r, and the position p is correctly updated to match the end of the arc, (100, 100+r), give or take a pixel.
However, when r is zero, ArcTo returns TRUE but the position is not updated: p contains the starting position (200, 100).
The documentation states that "If no error occurs, the current position is set to the ending point of the arc." The function returned TRUE indicating success so the position should have been updated.
In my view this is a bug. The function should return FALSE, because the rectangle is empty, so there is no arc and thus no well-defined endpoint. In practice, though, it would be more useful if the function returned TRUE and updated the current position to the final coordinate pair in the parameter list. It does neither. EDIT: An even better implementation in your case would be to calculate the arc end points in logical coordinates before converting to device coordinates, but GDI in general doesn't work like this.
The problem occurs in your code because your coordinate transformation collapses the second arc's rectangle to an empty rectangle when the zoom is 266. You can see this yourself by adding the following to your code to transform the coordinates of the second arc:
POINT points[4] = {{50,2311},{90,2351},{50,2331},{70,2351}};
LPtoDP(hdc, points, 4);
With the zoom set to 266, the points are transformed to (17,90), (17,91), (17,91), (17,91), so the rectangle has no width and is empty, and you hit the ArcTo bug.
I guess it works for smaller zooms when the rounding happens to put the x-coordinates into adjacent integers rather than the same integer.
A simple fix would be to create a MyArcTo function that replaces the arc with a LineTo when it is too small to be visible.
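Here is a minimal sketch of such a wrapper. The 2-device-pixel visibility threshold is an assumption, and falling back to the final coordinate pair is only approximately right, since (xRadial2, yRadial2) lies on the arc's ending ray rather than exactly on the arc; for an arc this small the difference is at most a pixel or so.

#include <windows.h>
#include <cstdlib>

// Hypothetical ArcTo replacement: falls back to LineTo when the arc's
// bounding rectangle collapses to (nearly) nothing in device coordinates,
// so that the current position is always advanced.
BOOL MyArcTo(HDC hdc, int left, int top, int right, int bottom,
             int xRadial1, int yRadial1, int xRadial2, int yRadial2)
{
    POINT corners[2] = { { left, top }, { right, bottom } };
    LPtoDP(hdc, corners, 2);

    // If the rectangle is degenerate in device space, GDI may draw nothing
    // and leave the current position untouched, so draw a straight line to
    // the arc's logical end point instead.
    if (std::abs(corners[1].x - corners[0].x) < 2 ||
        std::abs(corners[1].y - corners[0].y) < 2)
    {
        return LineTo(hdc, xRadial2, yRadial2);
    }
    return ArcTo(hdc, left, top, right, bottom,
                 xRadial1, yRadial1, xRadial2, yRadial2);
}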

Related

Mapping an image to a quadrilateral in p5.js without using WEBGL

I'm trying to map a custom image to a 4-sided quad with a non-rectangular shape in p5.js. I know this is possible (and quite easy) using a WEBGL canvas and the texture() command, but I'm trying not to use WEBGL simply because I don't like the WEBGL coding environment, and it seems overkill to swap to a 3D canvas just for this (I don't need any other 3D objects in my program).
I'm looking for a built-in solution, or a custom library with something of this sort in it. I've tried both to some degree and have turned up empty-handed, which is odd because this seems like a relatively simple thing to ask for.
I also don't understand HTML well in general, which is why I use p5.js, but I'm not against any kind of help: all is appreciated.
I've tried using a mixture of shearX() and shearY(), but those only work for an orthographic view; I'm going for perspective.
I have looked into brute-forcing it by literally going through each pixel in the quad and calculating the color it should have based on the image, but haven't gotten that to work yet. It also seems quite laggy, and I need the quad to render in real time.
If you don't want to use WebGL (or p5.js) there are other js libraries that can apply perspective warp via canvas, such as perspective.js.
Here's their example:
// ctx (CanvasRenderingContext2D): The 2D context of a HTML5 canvas element.
// image (Image): The image to transform.
var p = new Perspective(ctx, image);
p.draw([
  [30, 30],                               // top-left [x, y]
  [image.width - 50, 50],                 // top-right [x, y]
  [image.width - 70, image.height - 30],  // bottom-right [x, y]
  [10, image.height]                      // bottom-left [x, y]
]);
This may be a bit overkill, but warpPerspective() in opencv.js also supports a similar transform.
Here's their example:
let src = cv.imread('canvasInput');
let dst = new cv.Mat();
let dsize = new cv.Size(src.rows, src.cols);
// (data32F[0], data32F[1]) is the first point
// (data32F[2], data32F[3]) is the second point
// (data32F[4], data32F[5]) is the third point
// (data32F[6], data32F[7]) is the fourth point
let srcTri = cv.matFromArray(4, 1, cv.CV_32FC2, [56, 65, 368, 52, 28, 387, 389, 390]);
let dstTri = cv.matFromArray(4, 1, cv.CV_32FC2, [0, 0, 300, 0, 0, 300, 300, 300]);
let M = cv.getPerspectiveTransform(srcTri, dstTri);
// You can try more different parameters
cv.warpPerspective(src, dst, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());
cv.imshow('canvasOutput', dst);
src.delete(); dst.delete(); M.delete(); srcTri.delete(); dstTri.delete();

Halcon - find edge position, draw line and line intersection

I'm starting from scratch with Halcon, and I'm not able to solve a problem. I have an object, need to extract its edges, draw a line along the borders, and draw a point at the intersection of the lines.
I've tried thresholding, edge, and color edge, but it extracts borders everywhere except the ones I need.
It's just a test I am doing, as it is similar to what I will have to do later on a real project, but in two days I didn't manage to solve it.
Here is the base image, and the desired result image:
what I have so far:
open_framegrabber ('GigEVision', 0, 0, 0, 0, 0, 0, 'default', -1, 'default', -1, 'false', 'default', 'S1204667', 0, -1, AcqHandle)
set_framegrabber_param (AcqHandle, 'Gain', 1.0)
set_framegrabber_param (AcqHandle, 'ExposureTime', 20000)
set_framegrabber_param (AcqHandle, 'timerDuration', 1)
set_framegrabber_param (AcqHandle, 'BalanceWhiteAuto', 'Off')
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Red')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.22)
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Green')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.00)
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Blue')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.95)
grab_image (Image, AcqHandle)
threshold (Image, Region, 0, 128)
expand_region (Region, Region, RegionExpanded, 15, 'image')
close_framegrabber (AcqHandle)
Since the original poster is worried about positional movement, I'm posting another answer that is more involved. This strategy might not be the easiest for this case, but it is a general one that works for a lot of cases. Typically, problems like this are solved as follows:
1) Perform a rough location of the part. This usually involves either blob detection or a matching strategy (correlation, shape-based, etc.). The output of this step is a transformation describing the location of the object (translation, orientation).
2) Based on the location found in step 1, the search regions for detecting features (lines, holes, etc.) are transformed or updated to new locations, or the entire image is transformed.
I couldn't post all the code since it was too large. You will have to personal message me if you want me to email you the full HDevelop script. Here are some snippets to give you an idea:
Step 1: Threshold the image and set up search regions where the lines should be found. I'm only posting code for the first two regions, but the code is identical for the other three.
threshold(Image, RegionThreshold, 0, 100)
region_to_bin(RegionThreshold, ImageBinary, 255, 0, Width, Height)
dev_display(ImageBinary)
*Use the mouse to draw region 1 around first line. Right click when finished.
draw_rectangle2(WindowHandle, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
gen_rectangle2(Rectangle1, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
*Use the mouse to draw region 2 around second line. Right click when finished.
draw_rectangle2(WindowHandle, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
gen_rectangle2(Rectangle2, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
The search regions look like this:
Step 2: Calculate the intersection of the lines. I'm only posting code for the first two lines, but the code is identical for the other three.
*get line segment 1
reduce_domain(ImageBinary, Rectangle1, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine1, 'lanser2', 0.1, 20, 40)
fit_line_contour_xld (EdgesLine1, 'regression', -1, 0, 5, 2, RowBeginLine1, \
ColBeginLine1, RowEndLine1, ColEndLine1, Nr, Nc, Dist)
*get line segment 2
reduce_domain(ImageBinary, Rectangle2, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine2, 'lanser2', 0.1, 20, 40)
fit_line_contour_xld (EdgesLine2, 'regression', -1, 0, 5, 2, RowBeginLine2, \
ColBeginLine2, RowEndLine2, ColEndLine2, Nr, Nc, Dist)
*Calculate and display intersection line 1 to line 2
intersection_lines(RowBeginLine1, ColBeginLine1, RowEndLine1, ColEndLine1, \
                   RowBeginLine2, ColBeginLine2, RowEndLine2, ColEndLine2, \
                   Line1Line2IntersectRow, Line1Line2IntersectCol, \
                   IsOverlappingLine1Line2)
This produces the following output:
Step 3: Create a normalized cross-correlation model for finding the object when it undergoes translation or rotation. Here I chose a simple region on the bottom.
gen_rectangle1 (ModelRegion, 271.583, 200, 349.083, 530)
reduce_domain (ImageBinary, ModelRegion, TemplateImage)
create_ncc_model (TemplateImage, 'auto', rad(0), rad(360), 'auto', 'use_polarity', ModelID)
area_center (ModelRegion, ModelRegionArea, RefRow, RefColumn)
Output Image
Step 4: Now we consider what happens when the object is moved. To simulate this, I warped the image using an affine transform, then searched for the normalized cross-correlation model created in step 3. Below you can see the object was found. The output is a row, column, and angle where it was found; this is converted to a matrix called AlignmentHomMat2D.
Some of the code:
threshold(TransImage, RegionThreshold, 0, 100)
region_to_bin(RegionThreshold, ImageBinaryScene, 255, 0, Width, Height)
* Matching 01: Find the model
find_ncc_model (ImageBinaryScene, ModelID, rad(0), rad(360), 0.8, 1, 0.5, 'true', 0, Row, Column, Angle, Score)
* Matching 01: Display the centers of the matches in the detected positions
dev_display (TransImage)
set_line_width(WindowHandle, 3)
for I := 0 to |Score| - 1 by 1
    * Matching 01: Display the center of the match
    gen_cross_contour_xld (TransContours, Row[I], Column[I], 20, Angle[I])
    dev_set_color ('green')
    dev_display (TransContours)
    hom_mat2d_identity (AlignmentHomMat2D)
    hom_mat2d_translate (AlignmentHomMat2D, -RefRow, -RefColumn, AlignmentHomMat2D)
    hom_mat2d_rotate (AlignmentHomMat2D, Angle[I], 0, 0, AlignmentHomMat2D)
    hom_mat2d_translate (AlignmentHomMat2D, Row[I], Column[I], AlignmentHomMat2D)
    * Matching 01: Display the aligned model region
    affine_trans_region (ModelRegion, RegionAffineTrans, AlignmentHomMat2D, 'nearest_neighbor')
    dev_display (RegionAffineTrans)
endfor
The output is as follows:
Step 5: Finally, the search regions for locating the original lines are updated based on where the cross-correlation model was found.
Here is the code. Again, I'm only showing the first two line segments:
*transform initial search regions
affine_trans_region(Rectangle1, Rectangle1Transformed, AlignmentHomMat2D, 'nearest_neighbor')
affine_trans_region(Rectangle2, Rectangle2Transformed, AlignmentHomMat2D, 'nearest_neighbor')
*get line segment 1
reduce_domain(ImageBinaryScene, Rectangle1Transformed, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine1, 'lanser2', 0.5, 20, 40)
fit_line_contour_xld (EdgesLine1, 'regression', -1, 0, 5, 2, RowBeginLine1, \
ColBeginLine1, RowEndLine1, ColEndLine1, Nr, Nc, Dist)
*get line segment 2
reduce_domain(ImageBinaryScene, Rectangle2Transformed, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine2, 'lanser2', 0.5, 20, 40)
fit_line_contour_xld (EdgesLine2, 'regression', -1, 0, 5, 2, RowBeginLine2, \
ColBeginLine2, RowEndLine2, ColEndLine2, Nr, Nc, Dist)
*Calculate and display intersection line 1 to line 2
intersection_lines(RowBeginLine1, ColBeginLine1, RowEndLine1, ColEndLine1, \
                   RowBeginLine2, ColBeginLine2, RowEndLine2, ColEndLine2, \
                   Line1Line2IntersectRow, Line1Line2IntersectCol, \
                   IsOverlappingLine1Line2)
This produces the following output:
Halcon offers a lot of ways this can be accomplished, depending on the requirements. One of the most common techniques for detecting lines is the Hough transform. I've attached a small HDevelop script showing how to get the intersection of two of the lines in your image. The same principle can be used for the others.
One of the most important concepts in Halcon is regions. The example program first lets you create two regions by drawing rectangles over two of the lines. The regions are black in the image below. On line 8 of the program (draw_rectangle2...) you will need to draw a bounding box around the first line; right click when you are finished. Line 10 (draw_rectangle2...) will expect you to draw a bounding box around the second line; again, right click when finished.
The regions are then combined on lines 13-16 by concatenation. On line 19 (reduce_domain) the domain of the image is reduced to the concatenated regions. You can think of this as a mask: when we now search for the lines, we will only search the part of the image where we created the regions.
read_image (Image, 'C:/Users/Jake/Documents/Stack Overflow/Halcon/Find Edge Position, Draw Line and Line Intersection/FMuX1.jpg')
get_image_size (Image, Width, Height)
dev_open_window (0, 0, Width, Height, 'black', WindowHandle)
dev_display(Image)
*Use the mouse to draw region 1 around first line. Right click when finished.
draw_rectangle2(WindowHandle, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
*Use the mouse to draw region 2 around second line. Right click when finished.
draw_rectangle2(WindowHandle, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
*Generate a single region to search for two lines
gen_rectangle2(Rectangle1, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
gen_rectangle2(Rectangle2, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
concat_obj(Rectangle1, Rectangle2, Regions)
union1(Regions, RegionUnion)
*Reduce the domain of the image to the region created in lines 13-16
reduce_domain(Image, RegionUnion, ImageReduced)
* Detect edges (amplitude) using the Sobel operator
sobel_amp (ImageReduced, EdgeAmplitude1, 'thin_sum_abs', 3)
dev_set_color ('red')
threshold (EdgeAmplitude1, Region1, 100, 255)
hough_lines (Region1, 4, 50, 5, 5, Line1Angle, Line1Dist)
dev_set_color ('blue')
* Store input lines described in HNF
gen_region_hline (LineRegions, Line1Angle, Line1Dist)
*Select Line1
select_obj(LineRegions, Line1, 1)
*Select Line2
select_obj(LineRegions, Line2, 2)
*Calculate and display intersection
intersection(Line1, Line2, Line1Line2Intersection)
area_center(Line1Line2Intersection, Line1Line2IntersectArea, Line1Line2IntersectRow, Line1Line2IntersectCol)
disp_circle (WindowHandle, Line1Line2IntersectRow, Line1Line2IntersectCol, 6)

Translation and rotation around center

I'm trying to achieve something simple: set a translation on the X axis, and rotate the object around its center by a fixed angle.
To achieve this, as far as I know, it's necessary to move the object to the center, rotate, and move back to the original position. Okay. The problem I get, though, is that the object seems to rotate around its local axes and perform the last translation along those axes, so it ends up in a wrong position.
This is my code:
public void draw(GL10 gl) {
    gl.glLoadIdentity();
    GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 1, 0);

    gl.glTranslatef(x, 0, 0);
    gl.glTranslatef(-x, 0, 0);
    gl.glRotatef(-80, 0, 1, 0);
    gl.glTranslatef(x, 0, 0);

    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES10.GL_UNSIGNED_SHORT, indicesBuffer);
}
Before the rotation the object should be at (0, 0, 0). It rotates correctly, but then it moves toward the screen, as if the x axis were pointing at me (80°).
Note: I left only "opengl" as a tag, since this is a general OpenGL question; the answer should not be Android-specific.
This is the deprecated way of doing this, but I guess that is no excuse for not answering the question.
OpenGL performs matrix multiplications in reverse order when multiple transforms are applied to a vertex. For example, if a vertex is transformed by MA first and by MB second, OpenGL computes MB x MA before multiplying the vertex. So the transform that appears last in your code is applied to the vertex first, and the one that appears first is applied last.
gl.glPushMatrix();
gl.glTranslatef(globalX, 0, 0);  // 4. move the object back to its global position
gl.glTranslatef(localX, 0, 0);   // 3. apply some local movement
gl.glRotatef(-80, 0, 1, 0);      // 2. rotate around the origin
gl.glTranslatef(-globalX, 0, 0); // 1. applied to vertices first: move to the origin
gl.glPopMatrix();
First move from where you are in a hierarchy of transforms to the origin.
Then rotate around that origin.
Apply some local movement along any axis.
Move the object back to its global position.
Use glPushMatrix() and glPopMatrix() to undo these changes for other elements at the same level of relative positioning, that is, elements sharing the parent to which they are relatively positioned.
The push preserves the translations from previous (parent) objects, which OpenGL applies after the operations in the local code above, since the matrix stack is an ordinary LIFO stack.
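For the simple case in the question, where the object is modeled around (0, 0, 0), the fix reduces to writing the translation before the rotation. A minimal sketch in plain fixed-function OpenGL (drawObject() is a hypothetical stand-in for the vertex-buffer setup and draw call):

glPushMatrix();
glTranslatef(x, 0.0f, 0.0f);          // written first, applied to vertices last: place the object
glRotatef(-80.0f, 0.0f, 1.0f, 0.0f);  // written last, applied first: spin around the object's center
drawObject();
glPopMatrix();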

Writing to the screen from the screen using BitBlt

I'm trying to copy parts of the screen, modify them, and then copy those parts back to the screen. This is in windows, using C++.
The general structure of my code looks like this:
HDC hdcDesktop = GetDC(NULL);
HDC hdcTemp = CreateCompatibleDC(hdcDesktop);
BitBlt(hdcTemp, 0, 0, 100, 100, hdcDesktop, 100, 100, SRCCOPY);
BitBlt(hdcDesktop, rand() % 1920, rand() % 1080, 100, 100, hdcTemp, 0, 0, SRCCOPY);
This should copy a 100x100 portion of the screen starting at (100, 100) to some random part of the screen. This doesn't work, however. What am I doing wrong?
There are a few issues with this code:
As indicated by the docs, CreateCompatibleDC creates a memory DC whose initially selected bitmap is a single monochrome pixel (1x1). This is obviously not big enough for your 100x100 chunk of image. You should create a properly sized bitmap with CreateCompatibleBitmap and select it into the memory DC; see the sketch below.
The coordinates passed to BitBlt are:
top-left corner of the destination (nXDest, nYDest)
width/height of the copy (nWidth, nHeight)
top-left corner of the source (nXSrc, nYSrc)
in that order. You seem to be confusing nXSrc/nYSrc with nWidth/nHeight. Check your numbers.
Wanton abuse of the desktop surface like this may actually (1) be disallowed and (2) produce unexpected results. Be careful what you are attempting to achieve.
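For reference, here is a sketch of the copy with a properly sized bitmap selected into the memory DC. The 100x100 size and the coordinates are taken from the question; error checking is omitted:

#include <windows.h>
#include <cstdlib>

HDC hdcDesktop = GetDC(NULL);
HDC hdcTemp = CreateCompatibleDC(hdcDesktop);

// Give the memory DC a real 100x100 color surface; a freshly created
// memory DC only has a 1x1 monochrome bitmap selected into it.
HBITMAP hbm = CreateCompatibleBitmap(hdcDesktop, 100, 100);
HBITMAP hbmOld = (HBITMAP)SelectObject(hdcTemp, hbm);

// Copy 100x100 pixels starting at (100, 100) on the screen into the
// memory DC, then paste them back at a random position.
BitBlt(hdcTemp, 0, 0, 100, 100, hdcDesktop, 100, 100, SRCCOPY);
BitBlt(hdcDesktop, rand() % 1920, rand() % 1080, 100, 100, hdcTemp, 0, 0, SRCCOPY);

// Restore the original bitmap before cleaning up.
SelectObject(hdcTemp, hbmOld);
DeleteObject(hbm);
DeleteDC(hdcTemp);
ReleaseDC(NULL, hdcDesktop);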

overlapping partially transparent shapes in openGL

Please check this neat piece of code I found:
glEnable(GL_LINE_SMOOTH);
glColor4ub(0, 0, 0, 150);
mmDrawCircle( ccp(100, 100), 20, 0, 50, NO);
glLineWidth(40);
ccDrawLine(ccp(100, 100), ccp(100 + 100, 100));
mmDrawCircle( ccp(100+100, 100), 20, 0, 50, NO);
where mmDrawCircle and ccDrawLine just draw these shapes [FILLED] somehow... (ccp means a point with the given x, y coordinates).
My problem... yes, you guessed it: the line overlaps the circles, and both are translucent (semi-transparent). The final shape is there, but the overlapping parts become darker and the overall shape looks ugly. In other words, it would be fine if I were drawing with 255 alpha.
Is there a way to tell OpenGL to render only one of the shapes in the overlapping parts?
(The shape is obviously a rectangle with rounded ends: half-circles.)
You could turn on GL_DEPTH_TEST and render the line first, a little closer to the camera. When you then render the circles behind it, the fragments covered by the line won't be touched.
(You can also use the stencil buffer for an effect like this.)
Note that this might still look ugly. If you want anti-aliasing, you should think quite hard about which blending modes you apply and in what order you render the primitives.
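A rough sketch of the depth-test idea in plain fixed-function OpenGL; drawLineAt and drawCircleAt are hypothetical stand-ins for ccDrawLine/mmDrawCircle with an explicit depth, and which z value counts as "nearer" depends on your projection:

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glColor4ub(0, 0, 0, 150);

// Draw the line first, slightly nearer to the camera.
drawLineAt(100, 100, 200, 100, /* z = */ 0.1f);

// The circles sit a little farther away, so wherever the line already
// covers them their fragments fail the depth test and are discarded
// instead of being blended a second time.
drawCircleAt(100, 100, 20, /* z = */ 0.0f);
drawCircleAt(200, 100, 20, /* z = */ 0.0f);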
