Modeless dialog keyboard handling (winapi)

I've got an application with a main window that has a bunch of controls. The main window also handles the spacebar, via a simple method called onSpacebar(). On top of that main window, I've got a persistent modeless dialog.
I need the spacebar to behave the exact same way, regardless of whether the dialog has focus, or the main window has focus.
This dialog is backed by a DialogProc which looks something like this:
BOOL CALLBACK DialogProc(HWND hDlg, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch(uMsg)
    {
    case WM_NOTIFY:
        std::cout << "WM_NOTIFY" << std::endl;
        switch(LOWORD(wParam))
        {
        // which component caused the message?
        case COMP_TREE:
            if(((LPNMHDR)lParam)->code == NM_DBLCLK){
                onDoubleclk();
            }
            //...
            break;
        // other components...
        }
        break;
    case WM_CLOSE:
        // the dialog can only be closed when the whole app is closed
        //EndDialog(hDlg, IDCANCEL);
        return TRUE;
    case WM_DESTROY:
        PostQuitMessage(0);
        return TRUE;
    }
    return FALSE;
}
From what I gather, I should call my onSpacebar() method from within the DialogProc, similarly to how I handle the double click. I can see that WM_NOTIFY is received by the dialog when the spacebar is pressed (the phrase WM_NOTIFY is printed to cout), but I can't seem to differentiate the spacebar notification from the other numerous notifications the dialog receives.
Please, tell me how to recognize that the particular WM_NOTIFY was in response to a spacebar keypress.

A WM_NOTIFY message is not the standard way that a window processes key press events. When a key is pressed, your window should be receiving WM_KEYDOWN, WM_KEYUP, and possibly WM_CHAR messages. WM_NOTIFY serves an entirely different purpose: passing a message from a common control on to its parent window.
So the fact that you're receiving a WM_NOTIFY message in response to a key press is a fairly unusual thing, explainable when you understand how focus works (which is key to solving your ultimate question).
In Windows, only one window can have the focus at a time, and the focused window is the one that receives all keyboard input. Thus, if a dialog box has the focus, it will receive key press notifications. If a child control on that dialog box has the focus, it (not its parent dialog) will receive key press notifications. And if there is a focusable child control on a dialog box, it will always receive the focus in preference to its parent dialog, and therefore it will also always receive the key press notifications.
So the likely explanation for your curious WM_NOTIFY messages is that one of the common controls on your dialog has the focus, it is receiving the space key press event, and after processing it, passing on a notification to its parent window (your dialog) in the form of a WM_NOTIFY message. As you might imagine, this is not a reliable method of detecting that the space bar has been pressed.
Instead, you need to figure out some way of trapping key press notifications before they get sent to the focused control. To do that, you'll need to modify your application's message loop to trap WM_KEYDOWN or WM_KEYUP messages before calling either DispatchMessage or IsDialogMessage.
If the key event corresponds to the space bar, you will call your onSpacebar function and indicate that the message was handled, preventing it from being passed on and processed by another window.
If the key event does not correspond to the space bar, then you will need to handle the message as you usually would, ensuring that it does get passed on and processed by the other window.
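A minimal sketch of such a loop, assuming the onSpacebar() function from the question, with hDlg standing in for the modeless dialog's handle:
MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    if (msg.message == WM_KEYDOWN && msg.wParam == VK_SPACE)
    {
        onSpacebar();   // handle the space bar globally
        continue;       // swallow it: neither IsDialogMessage nor DispatchMessage sees it
    }
    if (IsDialogMessage(hDlg, &msg))
        continue;       // keyboard handling for the modeless dialog
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}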
Since this approach filters out space key presses at a global level, it solves both problems: child controls on the dialog stealing the key press, and the modeless dialog having the focus instead of the main window. However, you do need to be careful, because it's very easy to break things so that the user can't navigate your dialog using the keyboard at all.
More fundamentally, I think the idea of handling the space bar globally is flawed. The logic of certain common controls requires that they process presses of the space bar themselves. Consider a textbox: if you filter out all presses of the space bar at a global level, the user will never be able to type a space into a textbox. If you insist on handling the space bar, you will need to check the focused control in your global handler, and if it's a textbox (or another common control that you wish to receive spaces), pass the key on; otherwise, handle it yourself.
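A hedged sketch of that check, refining the loop above ("Edit" is the window class behind a standard textbox; other classes could be added the same way):
if (msg.message == WM_KEYDOWN && msg.wParam == VK_SPACE)
{
    wchar_t cls[32] = L"";
    GetClassNameW(GetFocus(), cls, 32);
    if (lstrcmpiW(cls, L"Edit") != 0)
    {
        onSpacebar();   // not an edit control: handle the space bar globally
        continue;
    }
    // focused control is an edit control: fall through so it still receives the space
}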
Honestly, what I'd do instead is choose a more distinctive key combination (like, say, Ctrl+Space) and set that up as an accelerator. Presumably your global message loop is already processing accelerator keys by calling the TranslateAccelerator function, so that would take care of all the dirty work for you. No code is even required; you can do everything simply by editing the accelerator resource in your project. The MSDN documentation on keyboard accelerators covers the details, but you'll probably have an easier time consulting your favorite book on Visual C++.
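If you'd rather build the table in code than in the resource file, a minimal sketch could look like this (ID_SPACE_CMD is a hypothetical command identifier; hMainWnd and hDlg stand for your main window and modeless dialog):
ACCEL accel = { FCONTROL | FVIRTKEY, VK_SPACE, ID_SPACE_CMD };
HACCEL hAccel = CreateAcceleratorTable(&accel, 1);

MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    if (TranslateAccelerator(hMainWnd, hAccel, &msg))
        continue;   // Ctrl+Space arrives as WM_COMMAND with ID_SPACE_CMD; call onSpacebar() there
    if (IsDialogMessage(hDlg, &msg))
        continue;
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}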

Related

SendMessage(WM_SYSCOMMAND) failed when mouse captured

I'm using the Win32 SDK, and some messages never work as expected while the mouse is captured by calling SetCapture(), such as:
case WM_LBUTTONDOWN:
    SetCapture(hWnd);
    SendMessage(hWnd, WM_SYSCOMMAND, SC_MAXIMIZE, 0);
    ReleaseCapture();
    return 0;
The window won't be maximized. Why?
Additionally:
1. If I use PostMessage() instead, it works.
2. If I use PostMessage() instead and also remove the ReleaseCapture() statement, it stops working again.
In general:
PostMessage is asynchronous, so ReleaseCapture() has already run by the time WM_SYSCOMMAND is processed. That leaves only one question: why can't the window be maximized while the mouse is captured?
I haven't found any documentation on this, but see here:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms646262(v=vs.85).aspx
"When the mouse is captured, menu hotkeys and other keyboard accelerators do not work." I suppose WM_SYSCOMMAND is not processed either because of this restriction. Maybe it was done this way to keep coordinates consistent.
As norekhov said, menu hotkeys don't work when the mouse is captured. When the mouse is captured, the only action a user can take that will result in the WM_SYSCOMMAND message being sent is to use a system menu hotkey.
Note that the WM_SYSCOMMAND message is only meant to be used to notify a window of a user initiated action. When you send it to a window, you're in essence trying to mimic a user's action. In this case you don't need to do this. You can tell the window to maximize itself directly:
ShowWindow(hWnd, SW_SHOWMAXIMIZED);
In this case it won't appear to be a user command, so it won't be ignored because the mouse has been captured.
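For example, the handler from the question could be rewritten along these lines (a sketch under the same assumptions as the original snippet):
case WM_LBUTTONDOWN:
    SetCapture(hWnd);
    ShowWindow(hWnd, SW_SHOWMAXIMIZED);   // maximize directly; not blocked by the capture
    ReleaseCapture();
    return 0;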

Mouse drag in embedded Win32 window

In a Qt dialog I have an embedded native Win32 window. As in any standard Win32 window, I define my own message handling, where, by default, I forward all messages to the parent window, and when a message of interest arrives I perform some extra work.
My problem is that when I press the left mouse button, I get the WM_LBUTTONDOWN as expected, but if I keep it pressed, I get no more mouse click events; that is, I get the WM_MOUSE messages, but the mask (wParam) and GetKeyState do not indicate that the mouse button is still pressed.
The window is created with the following parameters:
dwExStyle = WS_EX_TRANSPARENT;
dwStyle &= ~(WS_BORDER | WS_CAPTION | WS_DLGFRAME | WS_THICKFRAME);
CreateWindowExW(0, L"Window", L"Name", dwStyle,
                0, 0, 512, 512,
                hwndParent, NULL, hInstance, NULL);
When this native window is not embedded in any dialog, it works correctly.
I could also embed this window in a .NET dialog window and observe the same problem.
Any clue what could be going wrong?
Did you mean WM_MOUSEMOVE? Did you try capturing the mouse first?
WM_LBUTTONDOWN is sent only once per click. You'll have to use a boolean to keep track of whether the button is pressed or released during the WM_MOUSEMOVE events. Make use of the WM_LBUTTONDOWN and WM_LBUTTONUP messages together to keep track of this.
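A minimal sketch of that bookkeeping in the embedded window's procedure (the dragging flag is illustrative, not taken from the question):
case WM_LBUTTONDOWN:
    SetCapture(hwnd);        // keep receiving mouse messages even if the cursor leaves the window
    dragging = true;         // a bool kept alongside the window
    return 0;
case WM_MOUSEMOVE:
    if (dragging)
    {
        // the button is still held down: handle the drag using the coordinates in lParam
    }
    return 0;
case WM_LBUTTONUP:
    dragging = false;
    ReleaseCapture();
    return 0;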

Having trouble injecting keyboard events to FLTK

I have a system with no keyboard. I can connect a keyboard, but ultimately the inputs come from a custom keypad which is not a HID device, it sends serial data which I can interpret and decode to determine if the user pressed Up, Down, Left, Right, or Enter.
Right now all I have is a Fl_Window, with two Fl_Button widgets. Focus is set for one of the buttons and callbacks are defined for these buttons. I know that if I attach a real keyboard, and use the arrow keys I can change focus from button to button. I do have to hit SPACE to activate a button.
My problem is determining how to cause these key presses in code when I decode the output from the embedded keypad, because in deployment there will be no actual keyboard.
What I've tried is to invoke int Fl_Window::handle(int), without much success. I've also tried to invoke int Fl::handle(int, Fl_Window *), without success.
Here are code examples:
if((ret = Fl::handle(FL_Left, window)) == 0)
That compiles, but I find that I get zero back, implying that it did not process the event.
if((ret = Fl_Window::handle(FL_Right)) == 0)
That does not compile, informing me that it "cannot call member function virtual int Fl_Window::handle(int) without object"
I'm thinking that the "int event" actually ought to be FL_KEYDOWN.
That logic leads me to wonder, though, how I "set" event_key(). There are API functions to get it when one has a handler, but I do not wish to get it; I wish to cause that event to occur.
Is my only option here to figure out how to emulate a HID or make some type of virtual HID where I then cause the keyboard events to occur?
I do not feel I require a handler function in my application, I'm fine with the default behaviors which occur and cause my callback functions to be invoked. My problem is that I can't "cause" these events to occur.
You need to assign the desired key to e_keysym, then dispatch a FL_KEYDOWN event using Fl::handle_(). (Fl::handle() will not generate the follow-up FL_SHORTCUT event.)
Fl::e_keysym = FL_Left;
Fl::handle_(FL_KEYDOWN, window);
// sleep() and/or Fl::wait() as appropriate
Fl::handle_(FL_KEYUP, window);

Return key generates IDOK for button with focus

I did a SetFocus to a button in a dialog. The button gets the dashed outline. When the user presses the return key, the dialog gets an IDOK message rather than a message from the button where I set the focus. The same thing happens under other circumstances.
Why is this happening? And how can I cause the return to act as a button press?
Plain C++ Windows app, no MFC, no .NET.
Feature, not a bug. The [Enter] key operates the button that's marked as the default button for the dialog, either with DEFPUSHBUTTON in the .rc file or with the BS_DEFPUSHBUTTON style flag. That's typically the "OK" button, so getting IDOK back is expected. The [Escape] key is special that way too; it typically maps to the [Cancel] button. This is bound to ring a bell if you think back on how you used dialogs before.
You click a button that has the focus by pressing the space bar instead.
In another SO question I found a KB article that might help you:
If a dialog box or one of its controls currently has the input focus, then pressing the ENTER key causes Windows to send a WM_COMMAND message with the idItem (wParam) parameter set to the ID of the default command button. If the dialog box does not have a default command button, then the idItem parameter is set to IDOK by default.
When an application receives the WM_COMMAND message with idItem set to the ID of the default command button, the focus remains with the control that had the focus before the ENTER key was pressed. Calling GetFocus() at this point returns the handle of the control that had the focus before the ENTER key was pressed. The application can check this control handle and determine whether it belongs to any of the edit controls in the dialog box. If it does, then the user was entering data into one of the edit controls and, after doing so, pressed ENTER. At this point, the application can send the WM_NEXTDLGCTL message to the dialog box to move the focus to the next control.
According to MSDN,
Dialog Box Keyboard Interface
The system provides a special keyboard interface for dialog boxes that carries out special processing for several keys. The interface generates messages that correspond to certain buttons in the dialog box or changes the input focus from one control to another. Following are the keys used in this interface and their respective actions.
...
ENTER: Sends a WM_COMMAND message to the dialog box procedure. The wParam parameter is set to IDOK or the control identifier of the default push button.
Since the system intercepts the ENTER key and processes it directly through the dialog, you'll need to handle it in your dialog box procedure: call GetFocus() first to see which control has the focus, and then perform the appropriate action for that particular control.
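A minimal sketch of that check inside the dialog procedure (IDC_MYBUTTON is a hypothetical control ID; BM_CLICK makes the focused button act as if it had been clicked):
case WM_COMMAND:
    if (LOWORD(wParam) == IDOK)
    {
        HWND hFocus = GetFocus();
        if (hFocus == GetDlgItem(hDlg, IDC_MYBUTTON))
        {
            SendMessage(hFocus, BM_CLICK, 0, 0);   // treat ENTER as a press of the focused button
            return TRUE;
        }
        // otherwise let IDOK keep its usual "default button" meaning
    }
    break;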

Pressing Alt hangs the application

I'm programming a Windows application that doesn't have a menu. Every time I press Alt, it receives the WM_ENTERMENULOOP event and hangs until I press a key.
I've tried other applications without menu (like the MS .chm file viewer) and they exhibit the same behavior.
There is no difference between forwarding the event to DefWindowProc or processing it.
Is there a way to stop Windows from entering the menu loop if there is no menu? Alternatively, is there a way to exit it manually as soon as the event is received?
Process WM_SYSKEYDOWN and WM_SYSKEYUP manually (don't pass them to DefWindowProc) if you want to prevent entering the menu loop.
Also, you may want to process WM_SYSCHAR and return TRUE for that message to avoid beeps for keystrokes like Alt+SomeKey.
Handle these messages in your window procedure and return the correct result, without calling DefWindowProc:
case WM_SYSKEYDOWN:
case WM_SYSKEYUP:
case WM_SYSCHAR:
    return (LRESULT)1;
https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-syskeydown
https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-syskeyup
https://learn.microsoft.com/en-us/windows/win32/menurc/wm-syschar
Return value
An application should return zero if it processes this message.
