Windows 11
I have three monitors in front of me. The one on the left has negative screen x coordinates, the centre one starts from zero and the one on the right, a 4K, carries on where the centre one left off. All nice and usual. So I execute
BitBlt(hMem2, 0, 0, bmWide, bmHigh, hdc, pt.x, pt.y, SRCCOPY | CAPTUREBLT);
BitBlt(hMem3, 0, 0, bmWide, bmHigh, hMem2, 0, 0, SRCCOPY);
BitBlt(hMem3, 0, 0, bmWide, bmHigh, hMem1, 0, 0, SRCINVERT);
BitBlt(hdc, pt.x, pt.y, bmWide, bmHigh, hMem3, 0, 0, SRCCOPY);
to read a block of screen (from hdc), add some overlay and write it back at POINT pt.
Does it? Nope.
It appears about 2000 pixels to the right, consistently, no matter which monitor I put the test program on. The image painted is exactly what I want, with all the screen background from where it came from, so the first line is using pt correctly. It has my overlay. If it were landing in the right place it would all be wonderful.
All the hMems are loaded with a nice bitmap to hold the image.
I have tried setting
SetMapMode(hMem3, MM_TEXT);
to get 1 logical unit = 1 pixel on all of the HDCs.
I have experimented with versions of
SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);
but all to no avail. So I'm missing something. pt used as source is not the same as pt used as a destination.
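For reference, this is the sort of quick check I have been adding while debugging, just to see what the screen DC reports for pt (purely diagnostic, not a fix; the variable names and output format are mine):
POINT p = pt;
LPtoDP(hdc, &p, 1);                                // pt in device pixels for the screen DC
int dpi  = GetDeviceCaps(hdc, LOGPIXELSX);         // 96 means no DPI scaling on this DC
int left = GetSystemMetrics(SM_XVIRTUALSCREEN);    // left edge of the virtual screen
char msg[128];
sprintf_s(msg, "pt=(%ld,%ld) device=(%ld,%ld) dpi=%d virtual-left=%d\n",
          pt.x, pt.y, p.x, p.y, dpi, left);
OutputDebugStringA(msg);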
Can anybody tell me where I'm going wrong?
Related
If I have 2 monitors that sit side by side and I want to position a window in the upper left of the 2nd monitor, would the correct coordinates be [SCREEN1_WIDTH, 0]? The 2nd monitor sits to the right of the other monitor.
SetWindowPos(myHwnd, 0, SCREEN1_WIDTH, 0, 0, 0, SWP_NOSIZE);
I don't have a second monitor to test this. Would GetWindowRect(myHwnd, &r) then return the same coordinates (absolute coords), or would they be relative to the 2nd monitor?
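A sketch of how the second monitor's rectangle could be obtained from Windows instead of hard-coding SCREEN1_WIDTH (untested on real hardware here; PickSecondMonitor is just a made-up name):
// Untested sketch: enumerate the monitors and use the rectangle Windows reports
// for the non-primary one. All rectangles are in virtual-screen (absolute) coordinates.
static BOOL CALLBACK PickSecondMonitor(HMONITOR hMon, HDC, LPRECT, LPARAM lParam)
{
    MONITORINFO mi = { sizeof(mi) };
    GetMonitorInfo(hMon, &mi);
    if (!(mi.dwFlags & MONITORINFOF_PRIMARY))       // keep the non-primary monitor
        *reinterpret_cast<RECT*>(lParam) = mi.rcMonitor;
    return TRUE;                                    // continue enumeration
}

void MoveToSecondMonitor(HWND myHwnd)
{
    RECT second = {};
    EnumDisplayMonitors(nullptr, nullptr, PickSecondMonitor, reinterpret_cast<LPARAM>(&second));
    SetWindowPos(myHwnd, nullptr, second.left, second.top, 0, 0, SWP_NOSIZE | SWP_NOZORDER);
}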
Background
I'm working on a legacy MFC application which uses GDI to draw its content.
I need to draw rounded rectangles where each corner has a (potentially) different radius.
This means that I can no longer use RoundRect and have to roll my own using ArcTo.
I'm using SetWindowExtEx, SetWindowOrgEx, SetViewportExtEx and SetViewportOrgEx to implement zooming.
This works fine in most situations.
Problem
On certain zoom levels, my code fails to construct a proper path of the outline of the roundrect.
The following screenshots are of my RoundRect code being used to create a path, which is then used to clip a bigger rectangle (to get an idea of its shape).
The clipping region created by this path is sometimes missing a corner, clips everything (two missing corners?) or clips nothing.
My guess is that, due to rounding errors, the arcs become too small and are skipped altogether by GDI.
I find this hard to believe, though, since the code works correctly for smaller zoom factors than the ones pictured here.
Working correctly:
Missing a corner:
The Code
I have tried to reduce the code needed to reproduce it and have ended up with the following. Note that the number in the screenshots is the value of zoomFactor, the only variable.
You should be able to paste this code into the OnPaint function of a newly created Win32 application project and manually declare zoomFactor as a constant.
SetMapMode(hdc, MM_ISOTROPIC);
SetWindowOrgEx(hdc, 0, 40, nullptr);
SetWindowExtEx(hdc, 8000, 6000, nullptr);
SetViewportOrgEx(hdc, 16, 56, nullptr);
SetViewportExtEx(hdc, 16 + (396)*zoomFactor/1000,
                 48 + (279)*zoomFactor/1000, nullptr);
BeginPath(hdc);
MoveToEx(hdc, 70, 1250, nullptr);
ArcTo(hdc,                        // arc 1: top-left corner
    50, 1250, 90, 1290,
    70, 1250,
    50, 1270);
ArcTo(hdc,                        // arc 2: bottom-left corner
    50, 2311, 90, 2351,
    50, 2331,
    70, 2351);
ArcTo(hdc,                        // arc 3: bottom-right corner
    1068, 2311, 1108, 2351,
    1088, 2351,
    1108, 2331);
ArcTo(hdc,                        // arc 4: top-right corner
    1068, 1250, 1108, 1290,
    1108, 1270,
    1088, 1250);
CloseFigure(hdc);
EndPath(hdc);
SelectClipPath(hdc, RGN_AND);
HBRUSH br = CreateSolidBrush(RGB(255,0,255));
const RECT r = {0, 0, 8000, 6000};
FillRect(hdc, &r, br);
Here is a simpler bit of code to illustrate the problem:
const int r = 20;
MoveToEx(hdc, 200, 100, 0);
BOOL b = ArcTo(hdc,
100 + 2 * r, 100,
100, 100 + 2 * r,
100 + r, 100,
100, 100 + r);
POINT p;
GetCurrentPositionEx(hdc, &p);
This draws a single corner of radius r. This works fine for non-zero values of r and the position p is correctly updated to match the end of the arc: (100, 100+r), give or take a pixel.
However, when r is zero ArcTo returns TRUE but the position is not updated: p contains the starting position of (200,100).
The documentation states that "If no error occurs, the current position is set to the ending point of the arc." The function returned TRUE indicating success so the position should have been updated.
In my view this is a bug. The function should return FALSE, because the rectangle is empty so there is no arc and thus no well-defined endpoint. However, it would be more useful in practice if the function returned TRUE and updated the current position to match the final coordinate pair in the parameter list. But it does neither of these things. EDIT: An even better implementation in your case would be to calculate the arc end points in logical coordinates before converting to device coordinates, but GDI in general doesn't work like this.
The problem occurs in your code because your coordinate transformation collapses the second arc's rectangle to an empty rectangle when the zoom is 266. You can see this yourself by adding the following to your code to transform the coordinates of the second arc:
POINT points[4] = {{50,2311},{90,2351},{50,2331},{70,2351}};
LPtoDP(hdc, points, 4);
With the zoom set to 266 the points are transformed to (17,90), (17,91), (17,91), (17,91) so the rectangle has no width and is empty. And you hit the ArcTo bug.
I guess it works for smaller zooms when the rounding happens to put the x-coordinates into adjacent integers rather than the same integer.
A simple fix would be to create a MyArcTo function that replaces the arc with a LineTo when it is too small to be visible.
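A rough sketch of what such a wrapper might look like (MyArcTo is of course just a made-up name, and the two-device-pixel threshold is an arbitrary choice):
// Sketch: if the bounding rectangle collapses to (nearly) nothing in device
// units, skip the arc and draw a straight line to the intended end point, so
// the current position still ends up where the caller expects it.
BOOL MyArcTo(HDC hdc, int left, int top, int right, int bottom,
             int xr1, int yr1, int xr2, int yr2)
{
    POINT corners[2] = { { left, top }, { right, bottom } };
    LPtoDP(hdc, corners, 2);                        // bounding rect in device pixels
    LONG dx = corners[1].x - corners[0].x;
    LONG dy = corners[1].y - corners[0].y;
    if (dx < 0) dx = -dx;
    if (dy < 0) dy = -dy;
    if (dx < 2 || dy < 2)
        return LineTo(hdc, xr2, yr2);               // arc too small to be visible
    return ArcTo(hdc, left, top, right, bottom, xr1, yr1, xr2, yr2);
}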
I'm trying to achieve something simple: set a translation on the X axis, and rotate the object around its center by a fixed angle.
To achieve this, as far as I know, it's necessary to move the object to the center, rotate, and move back to the original position. Okay. The problem I get, though, is that it looks like the object rotates its local axes and does the last translation along those axes, so it ends up in the wrong position.
This is my code:
public void draw(GL10 gl) {
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 1, 0);
gl.glTranslatef(x, 0, 0);
gl.glTranslatef(-x, 0, 0);
gl.glRotatef(-80, 0, 1, 0);
gl.glTranslatef(x, 0, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glFrontFace(GL10.GL_CW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES10.GL_UNSIGNED_SHORT, indicesBuffer);
}
Before the rotation the object should be at 0,0,0. It rotates correctly. But then it comes nearer to the screen, as if the x axis were pointing towards me (at 80°).
Note: I left only "opengl" as a tag since this is a general OpenGL question; the answer should not be Android-related.
This is the deprecated way of doing this, but I guess that is no excuse for not answering the question.
OpenGL applies matrix multiplications in reverse order when multiple transforms are applied to a vertex. For example, if a vertex is transformed by MA first and by MB second, then OpenGL computes MB x MA and multiplies the vertex by that product. So the transform that appears last in your code is applied to the vertex first, and the one that appears first is applied last.
gl.glPushMatrix();
gl.glTranslatef(globalX, 0, 0);   // applied 4th: move back to the global position
gl.glTranslatef(localX, 0, 0);    // applied 3rd: local movement along an axis
gl.glRotatef(-80, 0, 1, 0);       // applied 2nd: rotate around the origin
gl.glTranslatef(-globalX, 0, 0);  // applied 1st: move to the origin
gl.glPopMatrix();
First move from where you are in a hierarchy of transforms to the origin.
Then rotate around that origin.
Apply some local movement along any axis.
Move the object back to its global positioning.
Use glPushMatrix() and glPopMatrix() to undo these changes for elements at the same level of relative positioning, that is, elements sharing the same parent element to which they are relatively positioned.
The push preserves the transforms of previous (parent) objects, which OpenGL applies after the operations in the local code above, since that is the order of a common stack (LIFO), in this case the matrix stack.
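As a minimal sketch of the same idea in plain C/C++ fixed-function OpenGL (drawObject() is a placeholder, x is the position from the question, and the object is assumed to be modelled around its own origin):
// Assumes an active legacy-GL context and <GL/gl.h>.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();                        // keep the parent/camera transform intact
glTranslatef(x, 0.0f, 0.0f);           // written first, applied to the vertex last: place the object at x
glRotatef(-80.0f, 0.0f, 1.0f, 0.0f);   // written last, applied first: spin around the object's own center
drawObject();                          // placeholder for the vertex arrays / draw call
glPopMatrix();                         // restore the parent transform
Because the rotation ends up being applied before the translation, the object spins in place and is only then moved out to x, which is the behaviour the question is after.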
I'm trying to copy parts of the screen, modify them, and then copy those parts back to the screen. This is in windows, using C++.
The general structure of my code looks like this:
HDC hdcDesktop = GetDC(NULL);
HDC hdcTemp = CreateCompatibleDC(hdcDesktop);
BitBlt(hdcTemp, 0, 0, 100, 100, hdcDesktop, 100, 100, SRCCOPY);
BitBlt(hdcDesktop, rand() % 1920, rand() % 1080, 100, 100, hdcTemp, 0, 0, SRCCOPY);
This should copy a 100x100 portion of the screen starting at (100, 100) to some random part of the screen. This doesn't work, however. What am I doing wrong?
There are a few issues with this code:
As indicated by the docs, CreateCompatibleDC creates a memory DC whose initially selected bitmap is a single monochrome pixel (1x1). This is obviously not big enough for your 100x100 chunk of image. You should probably create a bitmap of the right size with CreateCompatibleBitmap and select it into the memory DC (see the sketch after this list).
The coordinates passed to BitBlt are:
top-left corner of destination (nXDest, nYDest)
width/height of copy (nWidth, nHeight)
top-left corner of source (nXSrc, nYSrc)
in that order. You seem to be confusing nXSrc/nYSrc with nWidth/nHeight. Check your numbers.
Wanton abuse of the desktop surface like this may actually (1) be disallowed and (2) produce unexpected results. Be careful what you are attempting to achieve.
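Putting the first two points together, a corrected sketch might look something like this (same numbers as in the question, error handling omitted; this is only a sketch of the flow, not a drop-in fix):
// Inside whatever function does the copy; <windows.h> assumed.
HDC hdcDesktop = GetDC(NULL);
HDC hdcTemp    = CreateCompatibleDC(hdcDesktop);

// Give the memory DC a bitmap big enough for the copy; the default is 1x1.
HBITMAP hbm    = CreateCompatibleBitmap(hdcDesktop, 100, 100);
HGDIOBJ hbmOld = SelectObject(hdcTemp, hbm);

// BitBlt(dest, xDest, yDest, width, height, src, xSrc, ySrc, rop)
BitBlt(hdcTemp, 0, 0, 100, 100, hdcDesktop, 100, 100, SRCCOPY);
BitBlt(hdcDesktop, rand() % 1920, rand() % 1080, 100, 100, hdcTemp, 0, 0, SRCCOPY);

// Clean up the GDI objects when done.
SelectObject(hdcTemp, hbmOld);
DeleteObject(hbm);
DeleteDC(hdcTemp);
ReleaseDC(NULL, hdcDesktop);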
Please check this neat piece of code I found:
glEnable(GL_LINE_SMOOTH);
glColor4ub(0, 0, 0, 150);
mmDrawCircle( ccp(100, 100), 20, 0, 50, NO);
glLineWidth(40);
ccDrawLine(ccp(100, 100), ccp(100 + 100, 100));
mmDrawCircle( ccp(100+100, 100), 20, 0, 50, NO);
where mmDrawCircle and ccDrawLine just draw these shapes [FILLED] somehow... (ccp means a point with the given x, y coordinates).
My problem... yes, you guessed it: the line overlaps with the circles, and both are translucent (semi-transparent). So the final shape is there, but the overlapping parts become darker and the overall shape looks ugly; i.e., I would be fine if I were drawing with 255 alpha.
Is there a way to tell OpenGL to render only one of the shapes in the overlapping parts?
(The shape is obviously a rectangle with rounded ends, i.e. half-circles.)
You could turn on GL_DEPTH_TEST and render the line first and a little closer to the camera. When you then render the circle below, the fragments of the line won't be touched.
(You can also use the stencil buffer for an effect like this).
Note that this might still look ugly. If you want to use anti-aliasing you should think quite hard on which blending modes you apply and in what order you render the primitives.
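A rough sketch of the depth-test idea in plain OpenGL calls (drawLine and drawCircle are placeholders standing in for ccDrawLine/mmDrawCircle, here assumed to take a z value; which z counts as "nearer" depends on your projection):
// Assumes a depth buffer is attached and cleared each frame.
// The line is drawn first, slightly nearer the camera; the circles are drawn
// behind it, so in the overlap the circle fragments fail the depth test and
// the translucent colour is only blended once.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

glColor4ub(0, 0, 0, 150);
drawLine(100.0f, 100.0f, 200.0f, 100.0f, 0.1f);   // z = 0.1, nearer the camera
drawCircle(100.0f, 100.0f, 20.0f, 0.0f);          // z = 0.0, behind the line
drawCircle(200.0f, 100.0f, 20.0f, 0.0f);

glDisable(GL_DEPTH_TEST);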