Is it safe to store the calculated RECT from WM_NCCALCSIZE in a class member RECT variable and use it later in WM_NCPAINT? - windows

I need to know if it is safe to store a calculated non-client-area RECT (not the window rect, nor the client one; it is based on some calculation) from the WM_NCCALCSIZE message and use it later in WM_NCPAINT, without having to repeat the whole calculation in WM_NCPAINT?
i.e. is WM_NCPAINT always called immediately after WM_NCCALCSIZE?
I want to avoid redoing the calculation in the WM_NCPAINT handler, because DefWindowProc's handling of WM_NCCALCSIZE already produces everything I need to base my calculation on.
TIA.

i.e. is WM_NCPAINT always called immediately after WM_NCCALCSIZE?
I tested the sample provided by krsi, and WM_NCCALCSIZE is indeed always called before WM_NCPAINT, but that does not mean no other messages arrive in between. For example, in my test, after WM_NCCALCSIZE was received, WM_PAINT was dispatched first, and only then WM_NCPAINT.
You seem to be customizing the non-client area; I suggest reading Nonclient Area first.
In general, processing these messages for standard windows is not
recommended, because the application must be able to draw all the
required parts of the nonclient area for the window. For this reason,
most applications pass these messages to DefWindowProc for default
processing.
An application that creates custom nonclient areas for its windows
must process these messages. When doing so, the application must use a
window device context to carry out drawing in the window. The window
device context enables the application to draw in all portions of the
window, including the nonclient area. An application retrieves a
window device context by using the GetWindowDC or GetDCEx function
and, when drawing is complete, must release the window device context
by using the ReleaseDC function.
If you are only asking whether the size of the custom client area will change by the time WM_NCPAINT is called, I can't give a suitable answer, because I haven't seen your code and don't know what kind of project you need to complete.
I also tested the sample in the other link; its non-client area always keeps a 4-pixel border, and that calculation is likewise performed in WM_NCCALCSIZE.
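As a rough illustration of the pattern being asked about, here are fragments of a window procedure that cache a rect computed during WM_NCCALCSIZE and reuse it in WM_NCPAINT. m_cachedNcRect and CalcMyNcRect are hypothetical names, and the cached value is only as fresh as the most recent WM_NCCALCSIZE:

    // Inside the window procedure's switch (msg). This is a sketch, not a
    // complete custom-frame implementation.
    case WM_NCCALCSIZE:
    {
        // Let DefWindowProc perform the standard client-area calculation first.
        LRESULT res = DefWindowProc(hwnd, msg, wParam, lParam);
        if (wParam == TRUE)
        {
            NCCALCSIZE_PARAMS* p = reinterpret_cast<NCCALCSIZE_PARAMS*>(lParam);
            // CalcMyNcRect (hypothetical) derives the non-client rect you need
            // from the result, converting screen coordinates to window-relative
            // coordinates so it can be used with a window DC later.
            m_cachedNcRect = CalcMyNcRect(p->rgrc[0]);
        }
        return res;
    }

    case WM_NCPAINT:
    {
        // Paint the non-client area with a window DC, reusing the cached rect.
        HDC hdc = GetWindowDC(hwnd);
        if (hdc)
        {
            FillRect(hdc, &m_cachedNcRect, (HBRUSH)(COLOR_ACTIVECAPTION + 1));
            ReleaseDC(hwnd, hdc);
        }
        return 0;
    }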

Related

What device context should I use so I am able to BitBlt on top of any application or window? Is there a "general" context that refers to the display?

I'm very new to win32ui, basically just starting. I once used BitBlt with the Python win32api module, and as far as I remember, to draw on top of the display (i.e. on top of any application that happens to be open) I had to get a specific device-context handle. My memory is hazy on whether it was simply NULL or some specific context. NULL doesn't seem to work, so I wonder how to obtain that general context? I really want to avoid creating a fully transparent, non-blocking window.
The GetDC API allows you to get a device context for any given window. Alternatively,
If [hWnd] is NULL, GetDC retrieves the DC for the entire screen.
You can use the device context for the entire screen to read from, reliably (with restrictions). Rendering into a device context for a window you do not own won't be reliable, though. While it won't fail straight away, the window owner can overwrite your rendering at any point. There's no way for you to even be notified about this.
If you need to render on top of the screen you will have to create a top-most (transparent) window yourself, and use its device context. Make sure you ask the question: What if two programs did this?
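For the read-only case, a minimal sketch using the screen DC (the 200x200 region and the lack of error handling are just for illustration):

    #include <windows.h>

    // Copy the top-left 200x200 pixels of the display into a memory bitmap
    // using the DC for the entire screen. Reading like this is fine; drawing
    // back onto the screen DC can be overwritten by other windows at any time.
    int main()
    {
        HDC screenDC = GetDC(NULL);                  // DC for the entire screen
        HDC memDC    = CreateCompatibleDC(screenDC); // memory DC to copy into
        HBITMAP bmp  = CreateCompatibleBitmap(screenDC, 200, 200);
        HGDIOBJ old  = SelectObject(memDC, bmp);

        BitBlt(memDC, 0, 0, 200, 200, screenDC, 0, 0, SRCCOPY);

        // ... use the bitmap: save it, inspect pixels via memDC, etc. ...

        SelectObject(memDC, old);
        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
        return 0;
    }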

Troubleshoot why PrintWindow is blank

I am trying to capture a screenshot of an inactive window with PrintWindow. It works correctly for Calculator and for capturing Google Chrome, but for some other applications, like games, it saves a blank white area.
What could be the reasons for PrintWindow to fail and how to validate them?
EDIT: I want the tool to report why a window cannot be captured.
Every window has a default implementation for WM_PRINT; you get it when you call PrintWindow() without the PW_CLIENTONLY flag. It is provided by the default window procedure, DefWindowProc(), and it is pretty straightforward: it creates a memory DC and sends WM_PAINT. So what you get in the bitmap is the same thing you get on the screen. Everybody happy.
But that only works for windows that actually use WM_PAINT to render their content. The chips are down when they don't: almost any game renders with DirectX, and does so at a high rate, without relying on WM_PAINT messages. A program that uses a layered window with alpha transparency doesn't rely on it either; you can typically recognize such windows by their fancy blended borders.
What such a program should do is implement its own message handler for WM_PRINT/WM_PRINTCLIENT and render its surface into the provided device context. That is necessary because the default implementation doesn't know beans about DirectX surfaces.
Games just don't do this; it is extra code they would have to write that just about nobody ever actually uses. You'll inevitably end up with an empty bitmap, and there is of course nothing you can do about it.
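For completeness, a hedged sketch of what such a handler might look like inside an application's window procedure; DrawMyContent is a hypothetical helper standing in for whatever the application normally renders (its WM_PAINT path, or a copy of its DirectX back buffer):

    // Inside the window procedure: honor WM_PRINTCLIENT by rendering into the
    // device context the caller (e.g. PrintWindow) passes in wParam.
    case WM_PRINTCLIENT:
    {
        HDC hdc = reinterpret_cast<HDC>(wParam);
        RECT rc;
        GetClientRect(hwnd, &rc);
        DrawMyContent(hdc, rc);   // hypothetical: draw the same content as WM_PAINT
        return 0;
    }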
The documentation for PrintWindow provides information on its implementation:
The application that owns the window referenced by hWnd processes the PrintWindow call and renders the image in the device context that is referenced by hdcBlt. The application receives a WM_PRINT message or, if the PW_PRINTCLIENT flag is specified, a WM_PRINTCLIENT message. For more information, see WM_PRINT and WM_PRINTCLIENT.
Whether or not PrintWindow returns a window's content is subject to the window procedure handling the WM_PRINT or WM_PRINTCLIENT message[1] appropriately. If a window doesn't handle either of those messages, it will not render anything into the provided device context.
[1]Standard window implementations provide a message handler for WM_PRINT/WM_PRINTCLIENT through DefWindowProc. Custom window class implementations need to provide their own.
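A minimal sketch of the calling side, checking PrintWindow's return value (error handling and saving the bitmap are omitted):

    #include <windows.h>

    // Capture hwndTarget into a memory bitmap with PrintWindow. Note that a
    // nonzero return does not guarantee meaningful pixels: if the target never
    // renders in response to WM_PRINT/WM_PRINTCLIENT, the bitmap stays blank.
    bool CaptureWindow(HWND hwndTarget)
    {
        RECT rc;
        GetWindowRect(hwndTarget, &rc);
        int w = rc.right - rc.left;
        int h = rc.bottom - rc.top;

        HDC screenDC = GetDC(NULL);
        HDC memDC    = CreateCompatibleDC(screenDC);
        HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
        HGDIOBJ old  = SelectObject(memDC, bmp);

        BOOL ok = PrintWindow(hwndTarget, memDC, 0);

        // ... inspect or save the bitmap here ...

        SelectObject(memDC, old);
        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
        return ok != FALSE;
    }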

How to stop OpenGL from pausing when the window is out of focus or resizing?

I'm trying to prevent my rendering from stopping when my window is out of focus or being resized. In addition, if I resize my window smaller and then bigger again, anything that wasn't visible while it was smaller is now black. Is there any way to fix this?
There are really two distinct things going on here. The moving/resizing problem is caused by the Windows DefWindowProc function, which applications use to handle messages that aren't explicitly handled by the application itself. In the case of moving or resizing, it blocks in a modal message loop that handles most messages itself, though there are a few it still dispatches to the application, like WM_TIMER. You can find lots more information in this answer.
The second issue is that your program only "owns" the pixels inside your window, and only those that are not covered up by other windows. When you make the window smaller, the pixels at the edge need to be overwritten to show the window border or whatever is behind the window. When the window is made bigger, some drivers set the newly acquired pixels to black, while others leave them at whatever they were before (usually part of the window border). The OS doesn't remember whether those pixels had some specific color the last time the window was that size, because most of the time the program doesn't care. Instead, Windows sends a WM_PAINT message to indicate that the application should redraw the window. Your program should either handle this directly or use a library like GLFW that abstracts it. In addition, you need to handle resize events by calling glViewport with the new size of the window.
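If you go the GLFW route, a sketch of the relevant callbacks might look like this; drawFrame stands in for your own rendering code:

    #include <GLFW/glfw3.h>

    // Placeholder for the application's own rendering.
    void drawFrame(GLFWwindow* /*window*/)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... your draw calls ...
    }

    int main()
    {
        if (!glfwInit()) return 1;
        GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
        glfwMakeContextCurrent(window);

        // Keep the GL viewport in sync with the framebuffer size.
        glfwSetFramebufferSizeCallback(window, [](GLFWwindow*, int w, int h) {
            glViewport(0, 0, w, h);
        });

        // Redraw while the user is moving/resizing the window; the main loop
        // below is blocked inside the OS's modal move/size loop at that time.
        glfwSetWindowRefreshCallback(window, [](GLFWwindow* win) {
            drawFrame(win);
            glfwSwapBuffers(win);
        });

        while (!glfwWindowShouldClose(window))
        {
            drawFrame(window);
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }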

Fool the target app's HWND into thinking mouse moved

I am trying to emulate simple mouse movement in a window belonging to another process. My app uses global hooks (WH_CBT and WH_GETMESSAGE) to inject a DLL into the target process, and the injection works like a charm. The intention is to fool the target process into thinking the mouse went over a portion of the screen. When I move the physical mouse, it triggers certain app behavior (e.g. a tooltip is shown). I would prefer the actual mouse pointer to remain in its current position while I perform the "trick".
I have set up message monitoring with Spy++. Sending (or posting) plain WM_MOUSEMOVE messages to the target HWND is registered by Spy++ but has no desired effect. When the mouse is physically moved, the app does its thing. I have tried sending some other messages in conjunction with WM_MOUSEMOVE (e.g. WM_SETCURSOR), but things didn't improve. I have even hijacked GetCursorPos in the target process to return the same coordinate as posted in WM_MOUSEMOVE (the former is in screen coordinates, the latter in client coordinates), but this didn't help either.
When I do a simple SetCursorPos, the app does what it's supposed to do. What other magic is SetCursorPos doing that I am missing? The messages captured by Spy++ look more or less the same in both scenarios.
Any suggestions on how to send mouse movement are welcome. I do not want to use SendInput, mouse_event or other APIs. I need to target a specific HWND for a very brief period of time.
Usually a tooltip is shown as a result of a WM_NOTIFY message carrying the TTN_SHOW notification code. Have you tried that?
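For reference, the synthetic WM_MOUSEMOVE described in the question is typically posted like this, with client coordinates packed into lParam; as the question notes, this alone is often not enough, because controls may also query the real cursor position:

    // Post a synthetic WM_MOUSEMOVE at client coordinates (x, y) to hwndTarget.
    // wParam carries the MK_* button/key state; 0 means no buttons held.
    void PostFakeMouseMove(HWND hwndTarget, int x, int y)
    {
        PostMessage(hwndTarget, WM_MOUSEMOVE, 0, MAKELPARAM(x, y));
    }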

How does a GUI Framework work?

I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does the framework have to call methods from the operating system's GUI framework?
-- Thank you to anyone who takes the time to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it takes a few weeks to get results.
First you need a good drawing library. Minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and a clipped rectangle is just a bunch of horizontal lines.
Now you need at least drivers for the mouse, keyboard, and a timer (if not already provided by the operating system). In general, you will need to detect key presses, modifier keys (such as Shift), mouse moves, and mouse clicks. Basic timer functions will let you detect double clicks.
Then you need to create a window data structure. It needs to hold coordinates (i.e. a rectangle), a link to the parent window (if it is not a top-level window), and a window function, i.e. the function that will be called when this window should handle some event.
Once you can draw on screen you need some rectangle-algebra functions. You need at least a good function to calculate the intersection of rectangles, and a quick way to resolve relative coordinates to absolute ones. For example, if your child window has a parent, then its x and y should recursively be added to the parent's x and y until you reach the top window.
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
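A tiny sketch of what the window structure and rectangle helpers described above might look like; every name and field here is illustrative:

    #include <cstdint>

    struct Rect { int x, y, w, h; };

    struct Window;
    typedef void (*WindowProc)(Window* self, int msgId, intptr_t p1, intptr_t p2);

    struct Window {
        Rect       frame;        // coordinates relative to the parent window
        Window*    parent;       // nullptr for the top (desktop) window
        Window*    firstChild;
        Window*    nextSibling;
        WindowProc proc;         // called when this window should handle an event
    };

    // Intersection of two rectangles; w or h <= 0 means they do not overlap.
    Rect Intersect(const Rect& a, const Rect& b)
    {
        int x1 = a.x > b.x ? a.x : b.x;
        int y1 = a.y > b.y ? a.y : b.y;
        int x2 = (a.x + a.w < b.x + b.w) ? a.x + a.w : b.x + b.w;
        int y2 = (a.y + a.h < b.y + b.h) ? a.y + a.h : b.y + b.h;
        return Rect{ x1, y1, x2 - x1, y2 - y1 };
    }

    // Resolve a window's relative position to absolute coordinates by walking
    // up the parent chain, as described above.
    void ToAbsolute(const Window* w, int* absX, int* absY)
    {
        *absX = 0; *absY = 0;
        for (; w != nullptr; w = w->parent) { *absX += w->frame.x; *absY += w->frame.y; }
    }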
Now you can write your main event-harvesting function. This function will run all the time. Its purpose is to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check whether they have changed. If they have, find the window at that position ... and send a WM_MOUSEMOVE event to it. Your harvesting function should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
Now you should be able to send events to windows, but you really need a mechanism for it: a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure which commonly accepts four arguments: a message id (i.e. is it a mouse move, is it a paint message), a window handle, parameter 1, and parameter 2. You can call this window procedure directly using something like a send_message function, or you can send the window a message via a post_message function, which puts the message in the queue; the window will process queued messages one by one and eventually receive it. So why call some messages directly and queue others? Because of priority. A keyboard click can wait some time before being processed, but a window redraw must complete immediately to prevent flicker and wrong data on screen.
So your harvest_events function sends messages to windows using post_message and send_message, and your windows receive them through a typical message pump like this:
while ((pmsg = get_message()) != NULL) send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply obtains a message from the queue and passes it to send_message. Simple, huh? Well, not quite. This way you would only deliver driver messages to windows, but you also need functions to redraw windows, move them, and so on. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change and parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by the coordinate changes and send a paint message to those windows, to repaint only the parts that were uncovered (remember, you have clipping drawing functions, so this will work).
These functions introduce messages such as msg_paint, msg_move, msg_resize, msg_hide...
Last, you need to maintain a hierarchy of windows. Your top window should be the desktop. It has child windows (the applications' top-level windows), and those may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding these is a window tree. When you detect a mouse click you have to traverse the window tree, and do it in a smart way (finding out who has focus, who is modal, etc.) to send the message to the right window. When you draw, you also must traverse all children to see who is uncovered and who is not. Last but not least, you need to treat the mouse cursor rectangle as a top window to prevent the cursor from flickering as windows are redrawn or (using timers and msg_paint events) animated.
That's roughly it.
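Sticking with the illustrative post_message/send_message/get_message names from above, a bare-bones version of that queue and pump could look like this:

    #include <cstdint>
    #include <queue>

    // Minimal window type, redeclared here so the sketch stands on its own.
    struct Window;
    typedef void (*WindowProc)(Window* self, int id, intptr_t p1, intptr_t p2);
    struct Window { WindowProc proc; };

    struct Msg { int id; Window* hwnd; intptr_t p1, p2; };

    static std::queue<Msg> g_queue;   // one global queue for the sketch

    // Direct (synchronous) delivery: used for high-priority work such as repaints.
    void send_message(int id, Window* hwnd, intptr_t p1, intptr_t p2)
    {
        hwnd->proc(hwnd, id, p1, p2);
    }

    // Queued (asynchronous) delivery: used for input events that can wait.
    void post_message(int id, Window* hwnd, intptr_t p1, intptr_t p2)
    {
        g_queue.push(Msg{ id, hwnd, p1, p2 });
    }

    // Returns the next queued message, or nullptr when the queue is empty.
    Msg* get_message()
    {
        static Msg current;
        if (g_queue.empty()) return nullptr;
        current = g_queue.front();
        g_queue.pop();
        return &current;
    }

    // The message pump from the answer above, with the assignment parenthesized.
    void run_pump()
    {
        Msg* pmsg;
        while ((pmsg = get_message()) != nullptr)
            send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
    }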
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.
