I was trying to intercept the WM_SIZE message sent when the "X" button is tapped on Windows Mobile 6.5. I know that this message, along with the minimize value in wParam, can be used to do whatever we want.
However, the problem is that even if I implement my own behaviour for this event, the program gets minimized anyway. I tried putting a breakpoint and stopping execution at the WM_SIZE line, but by then the main app window has already been minimized.
Is there a way to prevent it from minimizing on its own when we click the "X" button?
Can you intercept it under WM_SYSCOMMAND? Look for SC_MINIMIZE and eat the message to keep it from minimizing. We do this to keep applications in a Kiosk configuration.
WM_SIZE is too late; it's sent AFTER the window has already been resized/maximized/minimized.
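For reference, a minimal sketch of that in a window procedure might look like the following (MyWndProc is a placeholder name, and the exact set of system commands delivered on Windows Mobile may differ from desktop Windows):

    // Sketch: swallow SC_MINIMIZE in the window procedure so the minimize
    // request never reaches DefWindowProc (which is what actually minimizes).
    LRESULT CALLBACK MyWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_SYSCOMMAND:
            // The four low-order bits of wParam are used internally by the
            // system, so mask them off before comparing.
            if ((wParam & 0xFFF0) == SC_MINIMIZE)
            {
                // Run your own behaviour here instead of minimizing...
                return 0;   // message eaten; the default minimize never happens
            }
            break;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }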
If I register a hook via SetWindowsHookEx(WH_SHELL, ShellProc, ...), what is the meaning of event HSHELL_WINDOWREPLACED? (My Google-fu fails me. I have searched high and low!)
Win32 Docs:
SetWindowsHookEx(): https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-setwindowshookexw
ShellProc (callback): https://learn.microsoft.com/en-us/windows/win32/winmsg/shellproc
The official docs read: "A top-level window is being replaced." Weirdly, they also say: "Windows 2000: Not supported." Does that mean it is supported only before Win2K, or only after?
I created a test driver to watch a Microsoft Windows session, but I was never able to trigger this mysterious event.
I also found a similar event here:
RegisterShellHookWindow: https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-registershellhookwindow
... that says:
HSHELL_WINDOWREPLACING: A handle to the window replacing the top-level window.
HSHELL_WINDOWREPLACED: A handle to the window being replaced.
Related:
How can I be notified when a new window is created on Win32?
Why HSHELL_WINDOWDESTROYED, HSHELL_WINDOWCREATED?
In this instance, the term "replace" refers to the occasions when a window stops responding to messages ("hangs") and, after a certain period, Windows hides it and replaces it on-screen with a faded-out copy (called a "ghost window").
Windows does this so that, even when the app is not processing messages, the user can interact with the ghost window to move it around and try to close it.
The wParam value is the handle of the hung window (the one being replaced) and the lParam value is the handle of the ghost window (its replacement).
If the window starts responding again, the notification is sent again, with the window handles swapped around.
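A sketch of what handling this might look like in a ShellProc callback, assuming the hook was installed with SetWindowsHookEx(WH_SHELL, ...) and that g_hNextHook holds the returned HHOOK (the wParam/lParam interpretation in the comments follows the explanation above):

    // Hypothetical ShellProc that watches for the ghost-window replacement
    // events described above. Requires _WIN32_WINNT >= 0x0500 for the
    // HSHELL_WINDOWREPLACED constant.
    static HHOOK g_hNextHook;   // HHOOK returned by SetWindowsHookEx

    LRESULT CALLBACK ShellProc(int nCode, WPARAM wParam, LPARAM lParam)
    {
        if (nCode == HSHELL_WINDOWREPLACED)
        {
            // Per the explanation above: the hung window being replaced,
            // and its ghost-window replacement.
            HWND hwndHung  = (HWND)wParam;
            HWND hwndGhost = (HWND)lParam;
            // ... log or react to the replacement here ...
        }
        // Always pass the notification on to the next hook in the chain.
        return CallNextHookEx(g_hNextHook, nCode, wParam, lParam);
    }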
I am a beginner with WINAPI and have been trying to understand the Windows messaging system.
While I have learnt that the GetMessage function in WinMain receives all messages sent to a program, I am unable to grasp how the API tells a control (say, a push button) that it has been clicked by the user. I have gone through tons of pages and am not able to find the exact sequence of messages, starting from the application thread's message queue up to the push button control.
I hope the question is not too "stupid" to merit an answer. Believe me, I have gone through tons of webpages, including MSDN, and nowhere have I been able to find a straightforward answer. I would really appreciate someone pointing me in the right direction.
When a mouse event occurs Windows searches all the windows on the desktop to find the window that's currently under the cursor. If multiple overlapping windows are under the cursor it picks the topmost window. Child windows are normally on top of their parent window, so this search prefers child windows over their parent window. Windows then posts a mouse event message to the message queue of the window it found.
The program that created the window should have some sort of message loop running in the thread that created the window. This loop normally calls GetMessage to pull messages out of the queue one by one. These messages are passed in turn to DispatchMessage, which looks at each message to find out which window it should be sent to. It then passes the message to the window by calling its window procedure.
So when you click on a push button control, the mouse events are dispatched to the window procedure for the control. The parent window of the control isn't notified, at least not directly. The button will generate a number of messages as a result, some sent to itself, some to its parent. Notably, it will send a WM_COMMAND message to let its parent know it's been clicked.
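The loop described above is the standard GetMessage/DispatchMessage pattern; stripped down, it is just:

    // Message loop running in the thread that created the window(s).
    // GetMessage pulls posted messages off this thread's queue;
    // DispatchMessage routes each one to the window procedure of the
    // window identified by msg.hwnd.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);   // e.g. turns WM_KEYDOWN into WM_CHAR
        DispatchMessage(&msg);    // calls the target window's window procedure
    }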
The specific sequence of messages that happens when clicking on a push button in a top-level dialog box is something like:
WM_LBUTTONDOWN: Posted by Windows to the push button when the mouse is clicked over it.
    BM_SETSTATE: Sent by the push button to tell itself to draw itself in the pushed state. This gives subclassed controls an opportunity to do their own drawing.
        WM_CTLCOLORBTN: Sent by the push button to its parent to find out what brush it should be drawn with, which it then ignores. This message does, however, allow the parent to change the text colour of the button before it's drawn.
WM_LBUTTONUP: Posted by Windows to the push button when the mouse button is released.
    BM_SETSTATE: Sent by the push button to tell itself to draw itself in the unpushed state.
        WM_CTLCOLORBTN: Sent, and ignored, as before.
    WM_CAPTURECHANGED: Sent by Windows to the push button telling it that it's no longer capturing the mouse. The push button captured the mouse when it received the mouse button down message so it would be notified of the button being released even if the pointer was no longer over the push button.
    WM_COMMAND: Sent to the parent to notify it that the push button has been clicked.
Indentation indicates where messages have been sent in response to another message. Posted messages go through the message queue before being dispatched to the window procedure that handles them. Sent messages go directly to the window procedure that processes them without going through the queue.
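To make the last step concrete, here is a sketch of a parent window procedure picking up that WM_COMMAND (IDC_MY_BUTTON is a made-up ID, assumed to be the one the button was created with):

    #define IDC_MY_BUTTON 1001   // hypothetical control ID used when creating the button

    LRESULT CALLBACK ParentWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_COMMAND:
            // LOWORD(wParam) = control ID, HIWORD(wParam) = notification code,
            // lParam = HWND of the control that sent the message.
            if (LOWORD(wParam) == IDC_MY_BUTTON && HIWORD(wParam) == BN_CLICKED)
            {
                MessageBox(hWnd, TEXT("Button clicked"), TEXT("Demo"), MB_OK);
                return 0;
            }
            break;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }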
This question might show a fundamental misunderstanding of DirectX programming in Windows, but I'm having a bit of an issue I can't figure out. My program, when running in full screen, sometimes gets in a weird state and I have to force close the app (CTRL+ALT+DEL).
The problem is that when I hit CTRL+ALT+DEL, Task Manager appears, but I can't use the mouse; the keyboard works at first, but if I click on the Task Manager window with my mouse, it loses focus and I can no longer regain focus. The app also does not minimize itself (a Windows app programming issue?).
Is it possible that my app is stealing the exclusive possession of the mouse? I am using DirectInput, but the mouse input is not handled by the app at all. Furthermore, this problem only happens when running the app fullscreen. If I run it in a Window, everything is fine.
If it matters, the tools I'm using are MS Visual Studio 12, Windows 8, and DirectX 9.
The solution to this was to unacquire all input devices and stop the rendering routines when the application lost focus.
I just set the app to keep track of whether or not it has focus, toggling that state in the Windows message pump when the relevant messages arrive. Specifically, I set focus to "off" when I receive the following messages:
WM_SIZE (when wParam = SIZE_MINIMIZED), WM_KILLFOCUS, WM_ENTERSIZEMOVE, and WM_ENTERMENULOOP
I set focus back on for the following messages:
WM_SIZE (all other cases), WM_SETFOCUS, WM_EXITSIZEMOVE, WM_ACTIVATEAPP with wParam set to true, and WM_EXITMENULOOP
WM_KILLFOCUS is adequate to solve the problem of CTRL+ALT+DEL-ing out of the application.
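In code, that focus tracking can be as simple as a flag toggled in the window procedure; a sketch of the scheme described above (g_hasFocus is a made-up name, and the render/DirectInput code is assumed to check it, unacquiring devices while it is false):

    static bool g_hasFocus = true;   // checked by the render loop each frame

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_SIZE:
            // Minimized = lost focus; any other size change = focused again.
            g_hasFocus = (wParam != SIZE_MINIMIZED);
            break;
        case WM_KILLFOCUS:
        case WM_ENTERSIZEMOVE:
        case WM_ENTERMENULOOP:
            g_hasFocus = false;
            break;
        case WM_SETFOCUS:
        case WM_EXITSIZEMOVE:
        case WM_EXITMENULOOP:
            g_hasFocus = true;
            break;
        case WM_ACTIVATEAPP:
            if (wParam) g_hasFocus = true;
            break;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }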
I'm trying to pause a DirectX game when the window loses focus, but the messages seem to be inconsistent.
In windowed mode, WM_SETFOCUS and WM_KILLFOCUS messages are received and everything works fine, but these messages are not received in full-screen mode. WM_NCACTIVATE is received in full-screen mode and works fine, but in windowed mode it is not received when the application is minimized from the taskbar. WM_ACTIVATEAPP is also not received in several cases.
Is there any consistent way of handling the gain/lose focus problem? I want to use only one message that is received in both full-screen and windowed mode.
You should use WM_ACTIVATE for that.
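For example (PauseGame/ResumeGame are placeholders for whatever the game actually does when it loses/regains activation):

    void PauseGame();    // placeholders, implemented elsewhere
    void ResumeGame();

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_ACTIVATE)
        {
            if (LOWORD(wParam) == WA_INACTIVE)
                PauseGame();    // deactivated, windowed or full screen
            else
                ResumeGame();   // WA_ACTIVE or WA_CLICKACTIVE
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }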
I have already made an application that sends commands to the active window. I want to be able to use the computer while my process is running, but as soon as I switch focus to another window, the keystrokes being sent via SendKeys go to the window I just switched to.
Currently I use FindWindow, IsIconic, and ShowWindow from the Windows API. I have to check that the window is there with FindWindow and set my object to the specific window returned by that call; I then check whether it's minimized with IsIconic and call ShowWindow if it is; and finally I have to call Interaction.AppActivate to set focus to that window. All of this is done before I even send keystrokes. It seems like there should be a way to just send keystrokes without having to show the window and activate it. The big problem is that while my application is sending keystrokes I can't do anything else on my computer.
Alright, this is kind of disappointing I'm sure, but you fundamentally cannot do this with 100% reliability.
Windows assumes that the active window is the one getting keyboard input. The proper way to fake keyboard input is with SendInput, and you'll notice that it sends messages to the active window only.
That being said, you can SendMessage WM_KEYUP, WM_CHAR, and WM_KEYDOWN messages and (depending on the WndProc receiving them) maybe get away with it. But remember, it's going to break under some circumstances, period.
Sounds like you are using keybd_event() or SendInput(), which both send keystrokes to the currently active window. To direct keystrokes to a specific window, regardless of whether that window is focused or not, you need to find its HWND handle first, and then post appropriately-formatted WM_KEYUP/DOWN and WM_CHAR messages directly to it.
Once you have the window's HWND, you can SendMessage() (or PostMessage()) the WM_KEYDOWN and WM_KEYUP messages directly to it. The window does not have to be active.
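A rough sketch of that approach ("Untitled - Notepad" and the Edit child-window lookup are just an example target; PostMessage is used here since keystrokes are normally posted, but SendMessage also works):

    #include <windows.h>

    int main()
    {
        // Example only: for Notepad the keystrokes must go to its "Edit"
        // child window, not the top-level frame.
        HWND hwndTop  = FindWindow(NULL, TEXT("Untitled - Notepad"));
        HWND hwndEdit = hwndTop ? FindWindowEx(hwndTop, NULL, TEXT("Edit"), NULL) : NULL;
        if (hwndEdit != NULL)
        {
            // lParam (repeat count / scan code flags) is left at 0 for brevity;
            // some targets inspect it and need it filled in properly.
            PostMessage(hwndEdit, WM_KEYDOWN, 'A', 0);
            PostMessage(hwndEdit, WM_CHAR,    'A', 0);
            PostMessage(hwndEdit, WM_KEYUP,   'A', 0);
        }
        return 0;
    }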
However, understand that this depends on how the target application processes keyboard input. There are several different ways to handle it.
WM_KEYDOWN/WM_KEYUP is the most common; some applications only process one or the other (usually WM_KEYDOWN).
WM_CHAR is also fairly common.
Some programs use GetAsyncKeyState, GetKeyState, or GetKeyboardState. This is extremely unusual, but it effectively prevents keypress injection with SendMessage(). If this is the case, fall back to keybd_event(), which is directly handled by the keyboard driver. Of course, the window will then have to be active.