How to stop event propagation despite WS_EX_NOACTIVATE?

I have a semi-transparent form (using AlphaBlend) that acts as an overlay. So that the user can still interact with the window below, I have set WS_EX_NOACTIVATE on my form, so all right and left clicks go through to the other window.
However, I have a few clickable labels on my form. Clicking those and performing the appropriate action works fine, since the OnClick methods are called despite the WS_EX_NOACTIVATE flag, but the click will (obviously) also propagate to the other window, which I do not want in this case.
So, does anyone know how to "stop" the click being sent through to the window below in case I have already handled it in my form? Basically, I would like to be able to choose whether the click "belongs to me" and does not get propagated, or whether the window below mine receives it.

As Rob explained, WS_EX_NOACTIVATE is not relevant here. Most likely you used WS_EX_TRANSPARENT and that made your window transparent to mouse clicks.
To get finer-grained control over mouse-click transparency, handle the WM_NCHITTEST message in your top-level window. Return HTTRANSPARENT for regions that you want to be "click-through". Otherwise return, for example, HTCLIENT.
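For illustration, a minimal sketch of such a hit-test handler in a plain Win32 window procedure; GetLabelScreenRect() is a hypothetical helper standing in for however you compute the labels' screen rectangle:

#include <windows.h>
#include <windowsx.h>

// Hypothetical helper: returns the bounding rectangle of the clickable
// labels in screen coordinates.
RECT GetLabelScreenRect(HWND hwnd);

LRESULT CALLBACK OverlayProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_NCHITTEST)
    {
        // WM_NCHITTEST reports the cursor position in screen coordinates.
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        RECT labels = GetLabelScreenRect(hwnd);
        if (PtInRect(&labels, pt))
            return HTCLIENT;      // keep clicks that land on the labels
        return HTTRANSPARENT;     // let everything else fall through
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}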

WS_EX_NOACTIVATE should be irrelevant here. That style just controls whether your window receives the input focus. Indeed, if you start with a scratch program and do nothing but change the extended window style, you'll see that when you click within the bounds of that program's window, the clicks are handled in the usual way, except that the window is never activated; programs behind that window do not receive any click events.
Therefore, to make your label controls eat click events instead of forwarding them to the windows behind them, you need to find out what you did to make them start forwarding those messages and simply stop doing that, whatever that is.

Related

wxWidgets programmatically move to next input control

I originally had code that set the focus to the first widget in a dialog, in the onInit method. But there were problems with it: if I pressed TAB, focus indeed moved to the next control (a wxTextCtrl), which got the blue 'focus' color, but the focus highlight was not removed from the previously focused widget. So it looked like the first and second controls had focus at the same time...
When cycling manually (by pressing TAB) full circle (to the last control, then wrapping around to the first), everything suddenly worked well. That is, when moving focus from the first control to the next one, the first visually lost focus (the blue color was removed) as it should. From then on, only one item had the focus color/highlight.
So instead of setting focus on the first control, I tried a different approach: I set the focus to the last control in the dialog, which is always the OK button. Next, I want to emulate programmatically that a TAB is pressed and received by the dialog. So I wrote this (inside Dialog::onInit):
m_buttonOK->SetFocus();
wxKeyEvent key;
key.SetEventObject(this);
key.SetEventType(wxEVT_CHAR);
key.m_keyCode=WXK_TAB;
ProcessWindowEvent(key);
Now the focus indeed moves away from the OK button, but it does not wrap around to the first control.
Only when I manually press TAB after the dialog has opened does the first item get focus.
Question: why does this wrapping around to set focus on first widget not work with the code shown above?
First of all, your initial problem is almost certainly related to not calling event.Skip() in one of your event handlers; see the note in the wxFocusEvent documentation.
Second, you can't send wx events to the native windows; they don't know anything about them. In this particular case you can use wxWindow::Navigate() to do what you want, but generally speaking what you're doing simply can't, and won't, work reliably.
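A rough sketch of the Navigate() approach; the dialog class and layout are illustrative, mirroring the question's setup:

#include <wx/wx.h>

class MyDialog : public wxDialog
{
public:
    MyDialog() : wxDialog(nullptr, wxID_ANY, "Demo")
    {
        wxBoxSizer *sizer = new wxBoxSizer(wxVERTICAL);
        sizer->Add(new wxTextCtrl(this, wxID_ANY), 0, wxEXPAND | wxALL, 5);
        m_buttonOK = new wxButton(this, wxID_OK, "OK");
        sizer->Add(m_buttonOK, 0, wxALL, 5);
        SetSizerAndFit(sizer);

        // Instead of synthesizing a TAB key event, let wxWidgets perform
        // the traversal itself: focus the last control, then move focus
        // forward, which wraps around to the first control in tab order.
        m_buttonOK->SetFocus();
        Navigate();
    }

private:
    wxButton *m_buttonOK;
};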

wxHaskell Button State

I'm writing an application using wxHaskell and I want to be able to detect the state of a button (whether or not it is pressed at any given time). I'm having a bit of trouble figuring out how to do this, however. First I thought that there might be a "button is pressed" attribute that I could use, but there didn't seem to be. Then I had the idea of maintaining an IORef which I update on button-up and button-down events. However, that would require that the Button object actually have button-up and button-down events, which it does not appear to. It is an instance of Commanding, but I assume that the command event is fired on button-up only, which isn't enough for that idea. Does anyone have any other suggestions?
Workaround
You can implement this yourself by detecting the low-level actions that trigger those events (e.g. mouse button down, space bar down).
In WX you can use the following function and constructor:
mouse :: Reactive w => Event w (EventMouse -> IO ())
data EventMouse = ... | MouseLeftDown !Point !Modifiers
And, as you suggest, you could keep the state yourself in an IORef. My suspicion is that left button here means main button (right for left-handed users).
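For illustration, the same state-tracking idea sketched in plain wxWidgets C++ (the library wxHaskell wraps); in wxHaskell the flag would live in an IORef updated from the mouse handler rather than in a member variable. HoldButton is a made-up name for the sketch:

#include <wx/wx.h>

class HoldButton : public wxButton
{
public:
    HoldButton(wxWindow *parent, const wxString &label)
        : wxButton(parent, wxID_ANY, label)
    {
        Bind(wxEVT_LEFT_DOWN, &HoldButton::OnDown, this);
        Bind(wxEVT_LEFT_UP, &HoldButton::OnUp, this);
    }

    bool IsHeld() const { return m_held; }

private:
    // Skip() lets the native button keep its normal click behaviour.
    void OnDown(wxMouseEvent &e) { m_held = true; e.Skip(); }
    void OnUp(wxMouseEvent &e) { m_held = false; e.Skip(); }

    bool m_held = false;
};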
UI design principles
The second question, which you haven't asked but I'll answer anyway, is whether this is good UI design.
The behaviour of a button (assuming interaction using a mouse) is that click events are reported when the user releases the mouse button in the button area after pressing the mouse button down in the same area. If the user moves away and releases, or presses 'Escape', there is no click.
Taking any action on a button being pressed (not clicked) would feel unnatural for users.
In practice, the only acceptable way to use this would be, imho, to take an action whose effects can only be witnessed after releasing, and which is immediately undone if the click is cancelled (i.e. the mouse button is released outside the button area).
EDIT: Please also take into account that users with accessibility requirements may have OS settings enabled that affect how and when button clicks are reported (but not down/up mouse events).
There is no way to know if a wxButton is pressed or not because it is an abstraction of a push button which intentionally hides this implementation detail. If you need to know the button state, use a wxToggleButton instead.
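A minimal illustration of that alternative in wxWidgets C++ terms (wxHaskell wraps the same control); parent is assumed to be an existing container window:

#include <wx/tglbtn.h>

wxToggleButton *MakeHoldButton(wxWindow *parent)
{
    // Unlike a push button, a toggle button keeps its pressed state as a
    // queryable property.
    return new wxToggleButton(parent, wxID_ANY, "Hold");
}

// Elsewhere in the UI code:
//   if (holdButton->GetValue()) { /* button is currently toggled on */ }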

X11: input focus for popups

For some fun and self-education, I'm tinkering with writing my own X11 toolkit. Here's something that's stumping me.
I have a traditional combo box display element, a typical combo box with a dropdown popup list, like all popular toolkits have.
For the dropdown popup list, I'm creating a new window, a child of the root window, appropriately positioned below the main combo-box display element.
The dropdown popup list is a window in its own full right, that implements keyboard-based navigation, to select the individual entries in the dropdown list.
So, I'm using SetInputFocus to set the input focus to the popup after it opens.
What I find is that when I do that, the window manager then redraws the frame of the main window to indicate that it no longer has input focus. Which is technically true, but I don't see the same results with the more mainstream toolkits, where, in the comparable situation, the main window's frame shows that it still has input focus.
For the pop-up window, in addition to setting override-redirect, I'm also doing everything I can think of to tell the window manager what's going on: setting the window group leader ID in the popup window's WM_HINTS, setting WM_TRANSIENT_FOR, and setting _NET_WM_WINDOW_TYPE to _NET_WM_WINDOW_TYPE_COMBO; none of that seems to work (I verified that the properties are appropriately set, via xprop).
It seems like I have to keep the input focus in the combo box window, and forward keypress and keyrelease events to the display elements in the dropdown popup, which feels clunky. Am I overlooking some property that would tell the window manager that the popup's input focus is linked to the main window's (besides the ones that I've mentioned), that would keep the main window's frame drawn to show that it has input focus, when the input focus is actually in the popup?
Most X11 override-redirect exclusive popup windows (menus, combo boxes, ...) grab the keyboard and/or pointer with either a passive or an active grab.
See XGrabKey, XGrabKeyboard, XGrabButton, XGrabPointer in the X11 programming manual.
Or maybe don't, because the manual is totally unclear about what the heck these functions are and how they can be used. Search the interwebs for usage examples, probably in other widget libraries. Unfortunately I don't know of a simple informative example offhand.
It is not necessary to call XSetInputFocus at all because all keyboard and/or pointer events are reported to the grabbing clients.
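As a hedged sketch, an active keyboard grab for the popup; dpy and popup are assumed to already exist, and error handling is omitted:

#include <X11/Xlib.h>

/* Call after mapping the override-redirect popup: key events are then
   delivered to the popup without moving the real input focus away from
   the main window. */
void grab_popup_keyboard(Display *dpy, Window popup)
{
    XGrabKeyboard(dpy, popup,
                  True,           /* owner_events */
                  GrabModeAsync,  /* pointer_mode */
                  GrabModeAsync,  /* keyboard_mode */
                  CurrentTime);
}

/* Call when the popup is closed. */
void release_popup_keyboard(Display *dpy)
{
    XUngrabKeyboard(dpy, CurrentTime);
}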

Winapi: window is "sent to back" when using EnableWindow() function

To prevent users from clicking in my main_window when a MessageBox appears I have used:
EnableWindow(main_window,FALSE);
I got a sample MessageBox:
EnableWindow(main_window,FALSE);
MessageBox(NULL,"some text here","About me",MB_ICONASTERISK);
EnableWindow(main_window,TRUE);
The problem is that when I press "OK" on my MessageBox, it closes and my main_window is sent to the back of all other system windows. Why is this happening?
I tried to put:
SetFocus(main_window);
SetActiveWindow(main_window);
before and after EnableWindow(main_window,TRUE); the result was strange: it worked about half the time. I guess I'm doing it in a way it shouldn't be done.
Btw.
Is there a better solution to BLOCK mouse clicks on a specific window than:
EnableWindow(main_window,FALSE);
Displaying modal UI requires that the modal child is enabled and the owner is disabled. When the modal child is finished, the procedure has to be reversed. The code you posted looks like a straightforward way to implement this.
Except, it isn't.
The problem is in between the calls to MessageBox and EnableWindow, in code that you did not write. MessageBox returns after the modal child (the message box) has been destroyed. Since this was the window with foreground activation, the window manager then tries to find a new window to activate. There is no owning window, so it starts searching from the top of the Z-order. The first window it finds is yours, but it is still disabled. So the window manager skips it and looks for another window, one that is not disabled. By the time the call to EnableWindow is executed it is too late - the window manager has already concluded that another window should be activated.
The correct order would be to enable the owner prior to destroying the modal UI.
This, however, is only necessary if you have a reason to implement modality yourself. The system provides a standard implementation for modal UI. To make use of it pass a handle to the owning window to calls like MessageBox or CreateDialog (*), and the window manager will do all the heavy lifting for you.
(*): The formal parameter to CreateDialog is unfortunately misnamed as hWndParent. Parent-child and owner-owned relationships are very different (see About Windows).
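A minimal sketch of that standard approach, reusing main_window from the question; no manual EnableWindow calls are needed:

#include <windows.h>

void ShowAbout(HWND main_window)
{
    /* Passing the owner handle makes the window manager disable
       main_window while the box is up, re-enable it when the box is
       dismissed, and hand activation straight back to it. */
    MessageBox(main_window, "some text here", "About me", MB_ICONASTERISK);
}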

On Windows, should my child window get a WM_WINDOWPOSCHANGED when its z-order changes relative to one of its siblings?

I am trying to insert a custom widget into the Internet Explorer 8 url bar, next to the stop and reload buttons. This is just a personal productivity enhancer for myself.
The "window model" for this part of the IE frame is an "address bar root" window that owns the windows which comprise the IE8 url bar: an edit box, a combo control, and the stop and reload buttons.
From another process, I create a new WS_CHILD window (with a custom class name) that is parented by IE's address bar root window, thus making it a sibling of the edit box and stop/reload. I call SetWindowPos with an hwndInsertAfter of HWND_TOP to make sure it appears "above" (i.e. "in") the urlbar. This works nicely, and I see my window painted initially inside the IE urlbar.
However, when I activate the IE window, the urlbar edit control jumps back in front of my window. I know this is happening because I still see my window painted behind the urlbar, and because when I print ->GetTopWindow() to the debug console on a timer, it becomes the HWND of the urlbar edit control.
If I update my message loop to call SetWindowPos with HWND_TOP on WM_PAINT, things are better -- now when I activate the IE window and move it around, my control properly stays planted above the edit control in the urlbar. However, as soon as I switch between IE tabs, which updates the text of IE's urlbar Edit control, my control shifts back behind the Edit control. (Note: This also happens when I maximize or restore the window.)
So my questions are:
1) Is it likely that IE is intentionally putting its urlbar edit control back on top of the z-order every time you click on a tab in IE, or is there a gap in my understanding of how Windows painting and z-ordering works? My understanding is that once you specify the z-ordering of child windows (which are not manipulable by the end-user), that ordering should remain until programmatically changed. So even though IE is repainting its Edit control upon tab selection whereas I am not repainting or otherwise acting upon my window, my window should still remain firmly on top.
2) Given that the z-order of my window is apparently changing, shouldn't it receive a WM_WINDOWPOSCHANGING/WM_WINDOWPOSCHANGED? If it did, I could at least respond to that event and keep myself on top of the Edit control. But even though I can see my window painting behind the urlbar Edit control when I click on a tab, and even though my debug window output confirms that the address bar root's GetTopWindow() becomes the HWND of the Edit control when I click on a tab, and even though I see WM_WINDOWPOSCHANGING/WM_WINDOWPOSCHANGED being sent to the Edit control with an hwndInsertAfter of HWND_TOP when I click on a tab, my own window receives no messages whatsoever that would allow me to keep the z-order constant. This seems wrong to me, and addressing it would force me to run in IE's process and hook all messages sent to its Edit control just to have an event to respond to :(
Thank you for your help!
It's quite likely that IE is juggling the Z-order of the controls when you change tabs. In IE9, the URL bar and the tabs have a common parent. When you select a new tab, it activates the URL bar (and activation usually brings the window to the top of its local Z order).
No. You get WM_WINDOWPOSCHANGED when a SetWindowPos function acts on your window. If some of the siblings have their z-orders changed, you don't get a message. Nobody called SetWindowPos on your window. You can see this by writing a test program that juggles the z-order of some child windows.
This makes sense because there might be an arbitrary number of sibling windows, and it could be an unbounded amount of overhead to notify all of them. It also would be nearly impossible to come up with a consistent set of rules for delivering these messages to all the siblings given that some of the siblings could react by further shuffling the z-order. Do the siblings that haven't yet received the first notification now have two pending notifications? Do they get posted or dispatched immediately? What if the queue grows and grows until it overflows?
This is different from WM_KILLFOCUS/WM_SETFOCUS notifications in that it affects, at most, two windows. That puts a reasonable bound on the number of notifications. Even if there's a runaway infinite loop because the losing control tries to steal the focus back, the queue won't overflow because there's only one SetFocus call for each WM_KILLFOCUS delivered.
Also, it's reasonable that windows might need to react to a loss of focus. It's much less likely that window C needs to know that B is now on top of A instead of the other way around, so why design the system to send a jillion unnecessary messages?
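A hedged sketch of the test program suggested above (all names are invented for the demo): two sibling children share a window procedure that logs WM_WINDOWPOSCHANGED, then one of them is reordered. Only the child passed to SetWindowPos logs the message; its sibling, whose relative z-order also changed, stays silent.

#include <windows.h>
#include <stdio.h>

static LRESULT CALLBACK ChildProc(HWND h, UINT m, WPARAM w, LPARAM l)
{
    if (m == WM_WINDOWPOSCHANGED)
        printf("WM_WINDOWPOSCHANGED on child %p\n", (void *)h);
    return DefWindowProcA(h, m, w, l);
}

void RunZOrderTest(HINSTANCE inst, HWND parent)
{
    WNDCLASSA wc = {0};
    wc.lpfnWndProc = ChildProc;
    wc.hInstance = inst;
    wc.lpszClassName = "ZOrderTestChild";
    RegisterClassA(&wc);

    HWND a = CreateWindowA("ZOrderTestChild", "", WS_CHILD | WS_VISIBLE,
                           10, 10, 100, 50, parent, NULL, inst, NULL);
    HWND b = CreateWindowA("ZOrderTestChild", "", WS_CHILD | WS_VISIBLE,
                           40, 30, 100, 50, parent, NULL, inst, NULL);
    (void)a;  /* `a` only exists to sit below `b` in the z-order */

    /* Reorder `b`: only `b` receives WM_WINDOWPOSCHANGED. */
    SetWindowPos(b, HWND_TOP, 0, 0, 0, 0,
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
}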
Hacking the UI of apps you don't control and that don't have well-defined APIs for doing the types of things you want to do is anywhere from hard to impossible, and it's always fragile. Groups that put out toolbars and browser customizations employ more people than you might expect, and they spend much of their day probing with Spy++ and experimenting. It is by nature hacking.
