Handling mouse events dependent on modifiers

I've written a few relatively trivial GUIs in WxWidgets and Qt, and I'm persistently unsure how to architecturally handle the following situations:
You catch a mouse event to do something to a graphical object in your GUI
What you do with the object depends on which modifier keys the user is holding down
At the start I usually do something like the following:
void MyClass::mouseMoveEvent(QGraphicsSceneMouseEvent* event)
{
    if (event->modifiers() & Qt::AltModifier) {
        // do something
    } else if (event->modifiers() & Qt::ControlModifier) {
        // do something else
    } else {
        // do yet another thing
    }
}
// Repeat ad nauseam for the other mouse click/move event handlers...
Eventually, with similar if/else/switch code in lots of mousePressEvent and mouseReleaseEvent handlers, this becomes unwieldy, so I try to encapsulate some of the repetition by putting the object into different "modes" depending on which modifiers are down. Nonetheless, I still feel I'm missing some elegant solution.
I've tried to look at the code of various open-source tools but I've not found anything tangible (erm, simple) enough to point me in a different direction.
Some tools (say, the GIMP) seem to have so many rich and varied tool- and modifier-dependent behaviours that I feel there's got to be a nice way of architecting this pattern.

Event handling in such GUI toolkits decides what to do based on an event and an event handler you provide. What you need is a way to decide what to do based on an event *and* a modifier. So, from each of the standard event handlers the toolkit gives you, delegate to a dedicated event-processing object that, based on the event and the current modifiers, invokes the right behaviour (an event-plus-modifier handler). That event-processing object is what you have to implement, and it is essentially the Chain of Responsibility design pattern.
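As a rough sketch of that idea in Qt-flavoured C++ (every class name below is invented for illustration; none of it is toolkit API), each modifier-dependent behaviour becomes one link in a chain, and every standard mouse event handler just forwards to it:

#include <QGraphicsSceneMouseEvent>
#include <memory>
#include <vector>

// Hypothetical chain of responsibility for modifier-dependent mouse handling.
struct MouseBehaviour
{
    virtual ~MouseBehaviour() = default;
    // Return true if this behaviour claimed the event.
    virtual bool tryHandle(QGraphicsSceneMouseEvent* event) = 0;
};

struct AltDragBehaviour : MouseBehaviour
{
    bool tryHandle(QGraphicsSceneMouseEvent* event) override
    {
        if (!(event->modifiers() & Qt::AltModifier))
            return false;              // not ours; pass to the next link
        // ... do the Alt-specific thing ...
        return true;
    }
};

class BehaviourChain
{
public:
    void append(std::unique_ptr<MouseBehaviour> b)
    {
        chain.push_back(std::move(b));
    }

    // Call this from mousePressEvent / mouseMoveEvent / mouseReleaseEvent;
    // the first behaviour that accepts the event wins.
    void dispatch(QGraphicsSceneMouseEvent* event)
    {
        for (auto& b : chain)
            if (b->tryHandle(event))
                return;
    }

private:
    std::vector<std::unique_ptr<MouseBehaviour>> chain;
};

The if/else ladder then collapses to a single dispatch(event) call in each handler, and adding a new modifier behaviour means appending a link rather than editing every handler.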

Related

Reacting to patterns of (UI) events over time

What is an expressive way of looking for patterns of events over time, and triggering new events?
For example, user interface events are often built up from patterns of simpler events, such as mousedown/up, mousemove, or keyup/keydown.
A drag-and-drop interaction requires listening for a mousedown event, followed by a number of mousemove events, followed by a mouseup, and checking whether draggable/droppable UI objects are targeted by the different events. Additionally, you might want timing and distance thresholds to avoid triggering a drag when the user meant to click, and you might want to watch for modifier keys, or for Escape to cancel the interaction.
Dealing with these things as a number of individual event listeners quickly gets complex and error-prone, tricky to debug, and often leads to conflicts between different events.
What abstractions are common for expressing these patterns succinctly and clearly?
The Composite pattern jumps to mind as a way to structure the code, because you have primitive events which combine to form complex events, while all events conform to a basic interface.
For a behavioral pattern, we intuitively think of an event as being observable; but in the case of a composite event, we may think of it as an Observer as well, where a complex event observes one or more primitive events.
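To make that concrete, here is a minimal sketch (C++, with every type invented for the example) of a composite "drag" event that observes primitive pointer events and applies a distance threshold before it commits to a drag:

#include <cmath>
#include <functional>

// Primitive pointer events, reduced to the minimum for the sketch.
struct PointerEvent { enum Kind { Down, Move, Up } kind; double x, y; };

// A composite "drag" event: it observes primitive events and emits
// higher-level notifications once movement exceeds a slop threshold.
class DragRecognizer
{
public:
    std::function<void(double dx, double dy)> onDrag; // fired per move
    std::function<void()> onDrop;                     // fired on release

    void observe(const PointerEvent& e)
    {
        switch (e.kind) {
        case PointerEvent::Down:
            pressed = true; dragging = false; x0 = e.x; y0 = e.y;
            break;
        case PointerEvent::Move:
            if (!pressed) break;
            if (!dragging && std::hypot(e.x - x0, e.y - y0) < threshold)
                break;                 // still within the click slop
            dragging = true;
            if (onDrag) onDrag(e.x - x0, e.y - y0);
            break;
        case PointerEvent::Up:
            if (dragging && onDrop) onDrop();
            pressed = dragging = false;
            break;
        }
    }

private:
    bool pressed = false, dragging = false;
    double x0 = 0, y0 = 0;
    const double threshold = 4.0; // pixels of slop before a click becomes a drag
};

A timing threshold, modifier checks, or an Escape-to-cancel rule would slot into observe() the same way, keeping the whole interaction in one place instead of scattered across listeners.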

Unity global mouse events

Most Unity tutorials suggest using Mouse events within the Update function, like this:
function Update () {
    if (UnityEngine.Input.GetMouseButton(1)) {
    }
}
This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system.
The OnMouseDown() method is useful, but it only fires when the mouse-down happens on the object, not anywhere in the scene.
So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?
"This strikes me as really inefficient though, similar to using onEnterFrame in AS or setInterval in JS to power the whole application - I'd really prefer to use an event-based system."
As already pointed out in the comments, this isn't necessarily less efficient. Every event-based system is probably using a polling routine like that behind the scenes, updated at a given frequency.
In many game engines/frameworks you'll find a polling-based approach to input handling. I think this is because the input update frequency is directly correlated with the frame rate/update loop frequency. In fact, it doesn't make much sense to listen for input at a higher or lower frequency than your game loop.
"So here's the question: Is there a MouseEvent in Unity for detecting if the mouse button is down globally, or is the Update solution the recommended option?"
No, there isn't. By the way, if you want, you can wrap mouse input detection inside a single class and expose events there that other classes can register with.
Something like:
using System;
using UnityEngine;

public class MouseInputHandler : MonoBehaviour
{
    public event Action<Vector2> MousePressed;
    public event Action<Vector2> MouseMoved;
    ...

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            // Raise the event only if anyone has subscribed.
            MousePressed?.Invoke(Input.mousePosition);
            ...
        }
    }
}
As stated, you can use it without major concerns; Unity does its magic internally and handles the performance-sensitive polling of events for you. That's the beauty of a modern game engine, after all. You normally shouldn't have to hack your way around a feature as common as mouse click detection.
However, if you don't want to use the main Update(), you can use a coroutine if you feel more comfortable with that; just bear in mind that Unity coroutines are not multi-threaded either, so in the end everything still waits on the main loop.

How is a modal dialog implemented?

For a long time I have wondered how modal dialogs are implemented.
Let me take Qt as an example. (Nearly every GUI toolkit has this mechanism.)
In the main event loop, a slot is called, and in this slot a modal dialog is opened. Until the dialog is closed, the slot doesn't return control to the main event loop. So I assumed the main event loop would be blocked and become unresponsive. Apparently this is not true: when you open a modal dialog, the background main window keeps working, repainting its UI or continuing to display a curve or some graph. It just stops accepting user input.
I did an experiment. Instead of opening a modal dialog in the slot, I started a new thread there and waited for the thread to finish in that slot. This definitely blocked the main event loop.
How are modal dialogs implemented, after all? How do they keep the main event loop unblocked while blocking the calling slot?
There is only ever a need for a single event loop, and it does not block when a modal dialog appears. Different toolkits may handle this differently, though; you would need to consult the documentation to know for sure. Conceptually, however, it all works the same way.
Every event has a source where the event occurred. When a modal dialog appears, the event loop either ignores or redirects all events that originate outside of the dialog. There's really no magic to it. Generally speaking, it's like an if statement in the event loop code that says "if (modal_is_shown() and !event_is_in_modal_window()) { ignore_and_wait_for_next_event() }". Of course, the logic is a bit more complicated, but that's the gist of it.
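Here is that gist as a minimal C++ sketch; the Event and Window types are invented, and a real toolkit's loop is of course far more involved:

// Invented types, for illustration only.
struct Window { /* ... */ };
struct Event { Window* target; bool isUserInput; };

Window* activeModal = nullptr;   // set when a modal dialog opens

void handleEvent(const Event& e)
{
    // User input aimed outside the modal window is simply dropped;
    // everything else (paint, timer, ...) is dispatched as usual.
    if (activeModal && e.isUserInput && e.target != activeModal)
        return;
    // dispatch(e); // invoke the target window's handler
}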
If you're looking for examples, here's another one:
In Tk, there is only ever one event loop. Modal behavior (it doesn't have to be a dialog; it can also be a tooltip, a textbox, etc.) is implemented simply by making the main window ignore mouse and keyboard events. All other events, like redraws, can still be serviced because the event loop is still running.
Tk implements this via the [grab] function. Calling grab on a UI object makes it the only object able to respond to keyboard and mouse events, essentially blocking all other objects. This doesn't mess with the event loop; it merely disables event handlers temporarily until the grab is released.
It should be noted that Unix-like operating systems running X also have grab built into the windowing system, so it's not necessarily implemented merely by UI toolkit libraries but is sometimes also a built-in feature of the OS. Again, this is implemented by simply blocking/disabling events instead of instantiating separate event loops. I believe this was also the case for the older Mac OS before OS X; I'm not sure about OS X or Windows. Even though modality is often implemented by the OS itself, toolkits like Qt and Tk often implement their own mechanisms to standardize behavior across platforms.
So the conclusion is, it is not necessary to block the main event loop to implement modality. You just need to block the events and/or event handlers.
The answer by https://stackoverflow.com/users/893/greg-hewgill is correct.
However, reading the follow-up discussion between him and https://stackoverflow.com/users/188326/solotim , I feel that there is room for further clarification, by means of prose and some pseudo-code.
I'll handle the prose part with a fact-list:
The main message loop does not run until the modal activity is finished
However, events are still delivered while the modal activity is running
This is because there is a nested event loop within the modal activity.
So far I have just repeated Greg's answer; please bear with me, as it is for continuity's sake. Below is where I hope to contribute additional, useful info.
The nested event loop is part of the GUI toolkit, and as such, it knows the callback functions related to every window in existence
When the nested event loop raises an event (such as a repaint event directed to the main window), it invokes the callback function associated with that event. Note that "callback" here may refer to a method of a class representing a window, in object-oriented systems.
The callback function does what is needed (e.g., repaint) and returns right back to the nested message loop (the one within the modal activity)
Last, but not least, here's pseudo-code to hopefully illustrate further, using a fictitious "GuiToolkit":
void GuiToolkit::RunModal( ModalWindow *m )
{
    // nested event loop: runs while the modal window stays open
    while( !GuiToolkit::IsFinished() && m->IsOpen() )
    {
        GuiToolkit::ProcessEvent(); // this will call
                                    // MainWindow::OnRepaint
                                    // as needed, via the virtual
                                    // method of the base class
                                    // NonModalWindow::OnRepaint
    }
}

class AboutDialog : public ModalWindow
{
};

class MainWindow : public NonModalWindow
{
    virtual void OnRepaint()
    {
        ...
    }

    virtual void OnAboutBox()
    {
        AboutDialog about;
        GuiToolkit::RunModal(&about); // blocks here!!
    }
};

int main()
{
    MainWindow window;
    GuiToolkit::Register( &window ); // GuiToolkit knows how to
                                     // invoke methods of window

    // main event loop
    while( !GuiToolkit::IsFinished() )
    {
        GuiToolkit::ProcessEvent(); // this will call
                                    // MainWindow::OnAboutBox
                                    // at some point
    }
}
In general, a modal dialog box of this type is implemented by running its own message loop instead of your application's message loop. Messages directed to your main window (such as timer or paint messages) will still get delivered, even during the modal operation.
In some situations, you have to be careful not to re-enter the same code path. For example, if you trigger a modal dialog box from a timer message combined with some persistent flag, you'll want to make sure you don't keep bringing up the same dialog box whenever the timer message fires.
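A guard flag is usually enough. Reusing the fictitious GuiToolkit from the pseudo-code above (so this is a sketch, not real API):

void MainWindow::OnTimer()
{
    static bool dialogShowing = false;
    if (dialogShowing)
        return;                   // timer fired again while the dialog is up
    dialogShowing = true;
    AboutDialog about;
    GuiToolkit::RunModal(&about); // the nested loop keeps delivering timer events
    dialogShowing = false;
}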

Talking Among GWT Panels using UIBinder Layout

New to GWT here...
I'm using the UIBinder approach to lay out an app, somewhat in the style of the GWT Mail sample. The app starts with a DockLayoutPanel added to RootLayoutPanel within the onModuleLoad() method. The DockLayoutPanel has a static North and a static South, with a custom center widget defined like this:
public class BigLayoutWidget extends ResizeComposite {
...
}
This custom widget is laid out using BigLayoutWidget.ui.xml, which in turn consists of a TabLayoutPanel (3 tabs), the first of which contains a SplitLayoutPanel divided into WEST (Shortcuts.ui.xml) and CENTER (Workpanel.ui.xml). Shortcuts, in turn, consists of a StackLayoutPanel with 3 stacks, each defined in its own ui.xml file.
I want click events within one of Shortcuts' individual stacks to change the contents of Workpanel, but so far I've only been able to manipulate widgets within the same class. Using the simplest case, I can't get a button click within Shortcuts to clear the contents of Workpanel or make Workpanel invisible.
A few questions...
Is ResizeComposite the right type of class to extend for this? I'm following the approach from the Mail example for TopPanel, MailList, etc, so maybe not?
How can I make these clicks manipulate the contents of panels in which they do NOT reside?
Are listeners no longer recommended for handling events? I thought I saw somewhere during compilation that ClickHandlers are used these days, and the click listener "subscription" approach is being deprecated (I'm mostly using #UiHandler annotations)
Is there an easy way to get a handle to specific elements in my app/page? (Applying the "ID" field in the UI.XML file generates a deprecation warning.) I'm looking for something like document.getElementById() that gets me a handle to specific elements. If that exists, how do I set the handle/ID on the element, and how can I then refer to that element by name/id?
Note that I have the layout itself pretty well nailed; it's the interaction from one ui.xml modularized panel to the next that I can't quite get.
Thanks in advance.
If you don't have a use for resize events, then just use Composite.
What you want is what the GWT devs call a message bus (implemented as HandlerManager). You can get a nice explanation in the widely discussed presentation by Ray Ryan from Google I/O 2009 (discussed, for example, on the GWT Google Group; just search for 'mvp'), which can be found here. Basically, you "broadcast" an event on that message bus, and a Widget listening for that event gets the message and does its stuff.
Yep, *Handlers are the current way of handling events - the usage is basically the same so migration shouldn't be a problem (docs). They changed it so that they could introduce custom fields in the future, without breaking existing code.
If you've set an id for any DOM element (for Widgets I use someWidget.getElement().setId(id), usually in combination with DOM.createUniqueId()), you can get it via RootPanel.get(String id). You'll then get a RootPanel which you'll have to cast to the right Widget class - as you can see, this can get a little 'hackish' (what if you change the type of the Widget with that id? Exceptions, or worse), so I'd recommend sticking with MVP (see the first point) and communicating via the message bus. Remember, however, that sometimes it's also good to aggregate - not everything has to be handled via the message bus :)
Bottom line: I'd recommend embracing MVP (and History) as soon as possible - it makes GWT development much easier and less messy :) (I know from experience that, with time, the code starts to look like a nightmare if you don't divide it into presentation, view, etc.)

Help with qt4 qgraphicsview

I've done lots of stuff with PyGTK, but I've decided to learn PyQt. I'm stuck on QGraphicsView: I have absolutely no idea how to get signals from the items I place on the graphics view, primarily mouse events. How do I get mouse events from individual items in a scene?
QGraphicsItem is not a QObject and cannot emit signals or receive slots. Instead, you must handle events. You can do that either through an event filter, by sub-classing the view or scene to intercept events, or simply by sub-classing the items themselves and implementing the event-handling functions (see the protected member functions in the documentation). Perhaps this example is of interest: http://doc.trolltech.com/4.6/graphicsview-diagramscene.html
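For instance, here is a minimal C++ sketch of the sub-classing approach (PyQt works the same way: subclass the item class and override the same methods):

#include <QGraphicsEllipseItem>
#include <QGraphicsSceneMouseEvent>
#include <QDebug>

// Receive mouse events by overriding an item's protected event handlers.
class ClickableItem : public QGraphicsEllipseItem
{
public:
    using QGraphicsEllipseItem::QGraphicsEllipseItem;

protected:
    void mousePressEvent(QGraphicsSceneMouseEvent* event) override
    {
        qDebug() << "item clicked at" << event->pos();
        // Keep the default behaviour (selection, moving, ...).
        QGraphicsEllipseItem::mousePressEvent(event);
    }
};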
Right after you create an item, connect the signals you want from it to the instance of the widget that contains it.
Another option is to give up on signals and have your QGraphicsItem subclass directly call a method of its parent by keeping a reference to it. This is less pretty than using signals, but ultimately it gets the job done.
