Attaching a double-click event to a label

How can I attach a "clicked" event to a label? I tried GtkEventBox but had no luck with it.

Connect to the button-press-event signal on the EventBox.

Gtk# differentiates between widgets and containers. Most widgets placed on a Gtk# form will NOT receive mouse click events. To receive mouse events you need to place the widget inside a suitable container, such as an EventBox (see the sketch after this list):
Add an EventBox container to your form. You can place it behind other widgets, since it is not visible unless you specifically set it to be (or change its background color).
Put your label widget inside this EventBox. Note that the label will take on the shape and size of the EventBox.
Connect the EventBox's ButtonPressEvent signal, listed under the "Common Widget Signals".
If you need to identify which button was clicked while handling this event, use the uint value in args.Event.Button: typically '1' is the left mouse button, '2' the middle button and '3' the right button ('2' may also be reported when both left and right buttons are pressed together).
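A minimal sketch of the same idea in the plain GTK C API (the Gtk# calls mirror it); assumes a GTK 2/3 environment where GtkEventBox exists:
#include <gtk/gtk.h>

static gboolean on_box_pressed(GtkWidget *widget, GdkEventButton *event, gpointer data)
{
    /* event->button: 1 = left, 2 = middle, 3 = right */
    if (event->type == GDK_2BUTTON_PRESS)
        g_print("label double-clicked with button %u\n", event->button);
    return TRUE;   /* handled; stop further propagation */
}

static void build_label(GtkContainer *parent)
{
    GtkWidget *box   = gtk_event_box_new();
    GtkWidget *label = gtk_label_new("Click me");
    gtk_container_add(GTK_CONTAINER(box), label);
    g_signal_connect(box, "button-press-event", G_CALLBACK(on_box_pressed), NULL);
    gtk_container_add(parent, box);
}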

It seems you don't need EventBox at all: Since the problem is that labels don't have any associated X11 window, just give it one with set_has_window(True)! The following works for me:
from gi.repository import Gtk, Gdk

self.label = Gtk.Label()
self.label.set_has_window(True)  # give the label its own GdkWindow
self.label.set_events(Gdk.EventMask.BUTTON_PRESS_MASK)
self.label.connect("button-press-event", self.click_label)
The documentation says it should only be used for implementing widgets - but hey, it works. Who cares about "should"?

2019-04-17 Update:
ptomato here is right: GtkLabel is one of the exceptions that indeed requires an eventbox, so you should connect to the button-press-event signal of the eventbox. For other widgets, the set/add events APIs in my original answer should still be relevant.
Original (wrong) answer:
Connect to the button-press-event signal, but directly on the GtkLabel. I'd say you don't need an eventbox here, as GtkLabel already inherits this signal from GtkWidget. To enable the GtkLabel to receive those events, you first need to call gtk_widget_set_events or gtk_widget_add_events, and include GDK_BUTTON_PRESS_MASK in the widget's event mask.
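For a widget that already has its own GdkWindow (a GtkDrawingArea, say), the original answer's approach does work; a hedged sketch:
GtkWidget *area = gtk_drawing_area_new();
gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK);   /* opt in to press events */
g_signal_connect(area, "button-press-event",
                 G_CALLBACK(on_box_pressed), NULL);   /* a callback like the one sketched above */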

Related

wxHaskell Button State

I'm writing an application using wxHaskell and I want to be able to detect the state of a button (whether or not it is pressed at any given time). I'm having a bit of trouble figuring out how to do this, however. First I thought that there might be a "button is pressed" attribute that I could use, but there didn't seem to be. Then I had the idea of maintaining an IORef which I update on button-up and button-down events. However, that would require that the Button object actually have button-up and button-down events, which it does not appear to. It is an instance of Commanding, but I assume that the command event is fired on button-up only, which isn't enough for that idea. Does anyone have any other suggestions?
Workaround
You can implement this yourself by detecting the low-level actions that trigger those events (e.g. mouse button down, space bar down).
In WX you can use the following function and constructor:
mouse :: Reactive w => Event w (EventMouse -> IO ())
data EventMouse = ... | MouseLeftDown !Point !Modifiers
And, as you suggest, you could keep the state yourself in an IORef. My suspicion is that left button here means main button (right for left-handed users).
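The same idea in C++ against the underlying wxWidgets API (which wxHaskell wraps); a hedged sketch, since whether a native button actually delivers raw mouse events varies by platform:
#include <wx/wx.h>

class HoldFrame : public wxFrame {
public:
    HoldFrame() : wxFrame(nullptr, wxID_ANY, "Button state") {
        auto *button = new wxButton(this, wxID_ANY, "Hold me");
        // Track the pressed state ourselves; this bool plays the IORef's role.
        button->Bind(wxEVT_LEFT_DOWN, [this](wxMouseEvent &e) { pressed = true;  e.Skip(); });
        button->Bind(wxEVT_LEFT_UP,   [this](wxMouseEvent &e) { pressed = false; e.Skip(); });
    }
    bool pressed = false;
};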
UI design principles
The second question, which you haven't asked but I'll answer anyway, is whether this is good UI design.
The behaviour of a button (assuming interaction using a mouse) is that click events are reported when the user releases the mouse button in the button area after pressing the mouse button down in the same area. If the user moves away and releases, or presses 'Escape', there is no click.
Taking any action on a button being pressed (not clicked) would feel unnatural for users.
In practice, the only acceptable way to use this would be, imho, to take an action whose effects can only be witnessed after releasing, and which is immediately undone if the click is cancelled (i.e. the mouse button is released outside the button area).
EDIT: Please also take into account that users with accessibility requirements may have OS settings enabled that affect how and when button clicks are reported (but not down/up mouse events).
There is no way to know if a wxButton is pressed or not because it is an abstraction of a push button which intentionally hides this implementation detail. If you need to know the button state, use a wxToggleButton instead.
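A hedged two-liner, assuming a containing window named parent (hypothetical):
#include <wx/tglbtn.h>

auto *toggle = new wxToggleButton(parent, wxID_ANY, "Pressed?");
bool down = toggle->GetValue();   // true while the button is toggled on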

Do I have the right idea with using SetCapture() for a windowless checkbox?

My Table control uses windowless checkboxes (because there can be an arbitrary number of checkboxes). Right now I use TrackMouseEvent(TME_LEAVE) and manually check whether the mouse is in the checkbox rect during WM_LBUTTONUP. I have TODOs marked in my code for the edge cases this causes, such as a missing WM_LBUTTONUP when the mouse has left the client area.
Now I notice today's The Old New Thing says buttons use mouse captures. This got me thinking, and after looking into it, mouse captures would fit what I need more appropriately; if my assumptions are correct it would handle the various edge cases I mentioned above and be more correct in general.
In particular, the assumptions I make are: I should abandon any capture-related operations on a WM_CAPTURECHANGED even if every other condition is met; I will get a WM_CAPTURECHANGED after a ReleaseCapture(); and after a SetCapture(), I will always end with either a WM_LBUTTONUP or a WM_CAPTURECHANGED, whichever comes first.
I've read both MSDN and a few articles I've found by Googling "setcapture correct use"; I just want to make sure I've got the right idea and will be implementing this correctly. Do I?
on WM_LBUTTONDOWN
if the button is in a checkbox
SetCapture()
mark that we're in checkbox clicking mode
on WM_MOUSEMOVE
if we are in checkbox clicking mode
draw the checkbox in the pressed state
on WM_LBUTTONUP
if we are in checkbox clicking mode
leave checkbox clicking mode
THEN call ReleaseCapture(), so we can ignore its WM_CAPTURECHANGED
if the mouse was released in the same checkbox
toggle it
on WM_CAPTURECHANGED
if we are in checkbox clicking mode
abandon checkbox clicking mode and leave the checkbox untoggled, even if the mouse is hovering over the checkbox
Do I have the right idea here? And in particular, is my order of operations for WM_LBUTTONDOWN correct? Thanks.
What you have said is basically right, although a real checkbox tracks WM_MOUSEMOVE while in "clicking mode" and displays the checkbox in its original state if the mouse moves off of it. So to emulate that you should have:
on WM_MOUSEMOVE
if we are in checkbox clicking mode
if mouse is over the checkbox
draw the checkbox in the pressed (toggled) state
else
draw the checkbox in the original state
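A hedged C++ sketch pulling the whole flow together inside the window procedure's switch; HitTestCheckbox, DrawCheckbox and ToggleCheckbox are hypothetical stand-ins for your own hit-testing, painting and state code:
static bool g_trackingCheckbox = false;   // "checkbox clicking mode"; lives outside the switch

case WM_LBUTTONDOWN:
    if (HitTestCheckbox(lParam)) {        // hypothetical: is the point in a checkbox?
        SetCapture(hwnd);
        g_trackingCheckbox = true;
    }
    break;

case WM_MOUSEMOVE:
    if (g_trackingCheckbox)               // pressed only while the mouse is over it
        DrawCheckbox(HitTestCheckbox(lParam));
    break;

case WM_LBUTTONUP:
    if (g_trackingCheckbox) {
        g_trackingCheckbox = false;       // leave the mode first...
        ReleaseCapture();                 // ...so we ignore the WM_CAPTURECHANGED this sends
        if (HitTestCheckbox(lParam))
            ToggleCheckbox();             // released over the same checkbox: toggle it
    }
    break;

case WM_CAPTURECHANGED:
    if (g_trackingCheckbox) {             // capture was stolen: abandon without toggling
        g_trackingCheckbox = false;
        DrawCheckbox(false);
    }
    break;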

How to stop event propagation despite WS_EX_NOACTIVATE?

I have a semi-transparent form (using AlphaBlend) that acts as an overlay. So that the user can still interact with the window below, I have set WS_EX_NOACTIVATE on my form so all right and left clicks go through to the other window.
However, I have a few clickable labels on my form. Clicking them and performing the appropriate action works fine, since the OnClick methods are called despite the WS_EX_NOACTIVATE flag, but the click will (obviously) also propagate to the other window, which I do not want in this case.
So, does anyone know how to "stop" the click from being sent through to the window below when I have already handled it in my form? Basically, I would like to be able to choose whether the click "belongs to me" and does not get propagated, or whether the window below mine receives it.
As Rob explained, WS_EX_NOACTIVATE is not relevant here. Most likely you used WS_EX_TRANSPARENT and that made your window transparent to mouse clicks.
To get finer grained control of mouse click transparency, handle the WM_NCHITTEST message in your top level window. Return HTTRANSPARENT for regions that you want to be "click through". Otherwise return, for example, HTCLIENT.
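A hedged fragment for the overlay's window procedure; InLabelRect is a hypothetical test for your clickable labels (needs <windowsx.h> for GET_X_LPARAM/GET_Y_LPARAM):
case WM_NCHITTEST: {
    POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };  // screen coordinates
    ScreenToClient(hwnd, &pt);
    return InLabelRect(pt) ? HTCLIENT        // keep the click for ourselves
                           : HTTRANSPARENT;  // let it fall through to the window below
}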
WS_EX_NOACTIVATE should be irrelevant here. That just controls whether your window receives the input focus. Indeed, if you start with a scratch program and do nothing but change the extended window style, you'll see that when you click within the bounds of that program's window, the clicks are handled in the usual way, except that the window is never activated; programs behind that window do not receive any click events.
Therefore, to make your label controls eat click events instead of forwarding them to the windows behind them, you need to find out what you did to make them start forwarding those messages and simply stop doing that, whatever that is.

GTK - Don't highlight buttons on select / hover

I'm trying to write a kiosk GUI in ruby/gtk on ubuntu. I'm pretty fluent in ruby, but new to writing GUIs and not great with linux.
I'm using a touch screen, and am using our own images for buttons, e.g.
button_image = Gtk::Image.new(Gdk::Pixbuf.new("images/button_image.png"))
@button = Gtk::Button.new
@button.add(button_image)
@button.set_relief(Gtk::RELIEF_NONE)
My issue is that when the buttons are pressed or remain selected (or hovered over, although this is less relevant with a touch screen), GTK shows fat, square borders around them. Obviously it's applying GTK's prelight / selected / active lighting to the buttons. I've tried changing the button properties in various ways, and also tried hacking apart my theme, and while I can modify how the highlighting looks, I can't seem to completely get rid of it. Changing the color of the highlight via my theme is easy, but if I remove my setting there's still a default I can't get rid of.
Does anyone know if there's a way to stop it, or possibly make it transparent? Thanks in advance!
Sounds like you want to use exactly your image for the whole button, instead of putting an image inside the normal GtkButton - but still use all the normal behavior of the button.
The easiest way to do this is to just override the drawing. If you are on gtk2, connect to the "expose-event" signal, do your drawing there, and return true so that the default handler doesn't get run. If you are on gtk3, connect to the "draw" signal and do the same.
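In the GTK 3 C API that looks roughly like the following (a hedged sketch; the Ruby binding exposes the same signal):
static gboolean on_button_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
{
    GdkPixbuf *pixbuf = GDK_PIXBUF(data);          /* your button image */
    gdk_cairo_set_source_pixbuf(cr, pixbuf, 0, 0);
    cairo_paint(cr);
    return TRUE;   /* TRUE = skip the themed default drawing, so no prelight */
}

g_signal_connect(button, "draw", G_CALLBACK(on_button_draw), pixbuf);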
I tried meddling with the drawing as Federico suggested, but found that the most direct way to address this was to use an event box rather than a button. Event boxes accept clicks just like buttons, but don't respond to selecting, hovering, etc. In Ruby, the code looks like this:
image = Gtk::Image.new("myfile.png")
event_box = Gtk::EventBox.new.add(image)
event_box.visible_window = false
event_box.signal_connect("button_press_event") do
  puts "Clicked."
end
Most of this is exactly like a button; the visible_window accessor, obviously, keeps the event box from being visible under the button image.

GUI: should a button represent the current state or the state to be achieved through clicking the button?

I've seen both, and it sometimes misleads the user. What do you think?
The label on the button should reflect what the button does, i.e. it should describe the change the button makes.
For example, if you have a call logging system a button should say "Close Call" and the user can click it to close the call. The button should not have the label "Call is Open" and the user clicks to change the call status as that's very counter-intuitive, since the button is effectively doing the opposite to what it says on it.
In my opinion the label, and so the function, of a button should rarely, if ever, change. A button is supposed to be like a physical button, and those usually do only a single thing. (There are a few exceptions, like play/pause on a media player, where it's OK for the button label/icon to change, but at least this is copying a button from a real physical device.)
To carry on the example from above, I would say usually you would want two buttons, "Open Call" and "Close Call" and disable whichever one is not appropriate. Ideally you'd have a field elsewhere displaying the status of the call.
In summary, buttons are for doing things not for passing on information to the user.
The button should represent the action to be executed, not the state.
Some buttons are actions and are not ambiguous, like "Save", "Print" or "Enable user".
When a button represents a state that can be toggled, like enabling and disabling something, I do one of the following:
- Change the button text, and make it always point to the state that will be achieved (i.e. make the button point to actions, not states);
- Keep the button's text the same, but use one of those sticky buttons that will stay pressed, representing that the current state is "on" or "off".
I prefer the former approach, though.
It should represent the action taken when clicking the button. States should always be presented by other means.
But I know what you mean. My car radio has buttons with text that shows the current state. It is really confusing.
This depends on the function that will be triggered by the button click.
If the click changes the state of an entity, I would suggest that the button represent the state the entity will enter after clicking the button.
If the click triggers some kind of functionality, the button should represent the function.
The appearance of the button is also a clue to its state. It should follow the standards of the environment, if any exist (for example, a beveled edge / shadow appears on mouse click in Windows).
