GWT Canvas - handling events altogether differently

The GWT HTML5 Canvas wrapper can respond to mouse and keyboard events; it binds to five or six different event types. My question is: is it possible to define an entirely new event system, such as a CanvasEvent (with a related CanvasEventHandler, extending GwtEvent, etc.), bind it to the canvas object, and then handle all events differently through a handler interface whose methods would be something like onDraw(), onDrag(), onMove(), onSelect(), and so on?
I don't have a good grasp of the GWT event system, but I want to know whether this is possible without individually attaching separate event handlers to build up the logic for my problem. Can I access all possible events as one consolidated object and fire custom events based on conditions? What would be the best way to do it? There are threads about GWT custom events, but they involve senders, whereas in my case the sender is already present (the Canvas).
Thanks much

Certainly - remember that all of the GwtEvent objects are completely artificial and are based off of events fired from the native JavaScript. That JavaScript event object (which comes in via onBrowserEvent) gets wrapped up as a ClickEvent or a KeyDownEvent based on the details of the original object, and is then fired off via the internal HandlerManager instance. In the same way, you could listen for a MouseDownEvent, set some state, then if a MouseMoveEvent occurs, fire your own CanvasDragEvent. You'd probably stop listening to those move events once a MouseUpEvent happens, and then issue something like a CanvasDropEvent. If instead the MouseUpEvent occurred right away with no move, you could trigger a CanvasSelectEvent (or you might have something else in mind for that select event).
Each of the new event types you declare might then carry specifics about whatever is going on. For example, while a MouseMoveEvent only has the element the mouse is over and the x/y coordinates, your event might indicate what is being dragged around the page. That might be in the form of the shape that was last clicked, or even the data it represents.
Yes, the 'sender', or source, is already there, but your events will be easier to use if you expose dedicated methods to add handlers, like addCanvasDragHandler, etc. This is not required - users of your code could just call addHandler - but without such methods it is ambiguous whether the widget actually supports the event in question. You would then call fireEvent on the canvas object to notify all registered handlers of that event type.
Or, you could make a new class that contains an internal Canvas widget - possibly a Composite object or just something that implements IsWidget (Canvas has a private constructor, so you can't subclass it). This would enable you to add your own particular handlers, and keep your own HandlerManager/EventBus to track the events you are concerned with.
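To make that concrete, here is a minimal sketch of a custom event plus the wiring that synthesizes it from the low-level mouse events. It assumes GWT 2.8+ (for lambdas); CanvasDragEvent, its handler, and drawShapeAt are hypothetical names for illustration, not part of the GWT API:
    import com.google.gwt.canvas.client.Canvas;
    import com.google.gwt.event.shared.EventHandler;
    import com.google.gwt.event.shared.GwtEvent;

    // Hypothetical higher-level event, fired when a drag gesture is detected.
    class CanvasDragEvent extends GwtEvent<CanvasDragEvent.Handler> {
        public interface Handler extends EventHandler {
            void onCanvasDrag(CanvasDragEvent event);
        }

        public static final Type<Handler> TYPE = new Type<Handler>();

        private final int x, y;

        CanvasDragEvent(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }

        @Override
        public Type<Handler> getAssociatedType() { return TYPE; }

        @Override
        protected void dispatch(Handler handler) { handler.onCanvasDrag(this); }
    }

    // Wiring (e.g. inside onModuleLoad; assumes canvas support is available):
    Canvas canvas = Canvas.createIfSupported();
    boolean[] mouseDown = {false}; // state shared by the low-level handlers

    canvas.addMouseDownHandler(e -> mouseDown[0] = true);
    canvas.addMouseMoveHandler(e -> {
        if (mouseDown[0]) {
            canvas.fireEvent(new CanvasDragEvent(e.getX(), e.getY()));
        }
    });
    canvas.addMouseUpHandler(e -> mouseDown[0] = false);

    // Anyone interested in the high-level event just registers for it:
    canvas.addHandler(e -> drawShapeAt(e.getX(), e.getY()), CanvasDragEvent.TYPE);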

Related

Programming a custom GUI in OpenGL

I am creating my own GUI in OpenTK.
I want to fire a mouse event when the cursor is, for example, inside one of the GUI controls. How can I do that? Right now I'm just iterating through a list of items in the main class, and in the OpenTK window's MouseMove event I'm checking whether the mouse coordinates are within the "region" of the component I'm drawing.
This works for now, but I think it could be done in a better way. As it stands, my code is disorganized and lives in the main class, and I would rather have it in the specific component class.
What I would like is to have an event attached to each component of my GUI, so that I can define many events for one component.
I mean, I would like to have, for example, a button component where I can override or just use a method that fires when an event occurs - the same as OpenTK's window, where you can override events.
This is not a complete answer, because your question is quite broad, but I hope it helps.
In order to implement such a system, here are the core components for a potential design:
UI Components: Some kind of standard interface where different component types can define logic for interactions. Depending on the language, the most common approach is probably something like a parent class Component, with methods to be overridden. These would probably include things like:
Mouse Hover
Mouse Click (press / release)
Click drag
It will also likely need some additional associated information:
Some way to determine the component's location. This could be a bounding box, or perhaps a method that tests whether a given point falls within the component.
Information or functionality for drawing the component.
Display and layering settings (is it visible or hidden, should it draw on top of other components or behind).
UI Context: The context is a structure that defines the set of components that exist in the UI. It could be something like a list of Components. In order to build your UI, you would add components to this context. The context defines some behaviour:
Managing components (add / remove / modify).
How to draw the entire context (for example, looping over each component and executing the draw functionality for each).
Handling of events (see next section)
Event Dispatch: To make your UI usable, you can insert an "adapter" layer that handles events from your windowing library (OpenTK) then translates them into usable events for your components and dispatches them. Here is an example of how this might work for a "click" event (pseudo-code):
function TK_Event_ClickPressed(point) {
    for component in context {
        if component.ContainsPoint(point) {
            component.EventClickPressed()
        }
    }
}
This is actually the trickier part of the design, in my opinion, because there are some established conventions around how component-based UIs work. You don't necessarily have to follow them, but they're important to be aware of, because they are probably how people will expect your UI to behave:
After click press, click drag continues to occur until click release, even if the cursor leaves the component area.
"Actions" occur on click release.
Click release only takes action if the corresponding click press occurred on the same component.
The click release doesn't take any action if the cursor is no longer inside the component (leaving and re-entering the component before release still does the action, though).
You can only be actively clicking one component at a time (the one shown on top), even if multiple components overlap at that spot.
Assuming that you follow these conventions, this means that dispatching events is actually a bit more complicated than just checking if the event point was in a given component or not. You need to maintain some kind of state to keep track of whether the context is currently in a click or not, and which component, if any, is "consuming" the current click. That is, which component should be given the click release and drag events if they occur.
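Here is a rough sketch of that state tracking, written in Java with a hypothetical Component interface (OpenTK itself is C#, so treat this as structural pseudo-code rather than drop-in code):
    import java.util.ArrayList;
    import java.util.List;

    interface Component {
        boolean containsPoint(float x, float y);
        void onClickPressed(float x, float y);
        void onClickDragged(float x, float y);
        void onClickReleased(float x, float y); // the "action" hook
    }

    class UiContext {
        private final List<Component> components = new ArrayList<>();
        private Component activeClick; // component consuming the current click, if any

        void clickPressed(float x, float y) {
            // Walk the top-most component first so overlapping components
            // resolve to the one drawn on top.
            for (int i = components.size() - 1; i >= 0; i--) {
                Component c = components.get(i);
                if (c.containsPoint(x, y)) {
                    activeClick = c;
                    c.onClickPressed(x, y);
                    return; // only one component may consume the click
                }
            }
        }

        void clickDragged(float x, float y) {
            // Drag keeps flowing to the pressed component, even if the
            // cursor has left its area.
            if (activeClick != null) activeClick.onClickDragged(x, y);
        }

        void clickReleased(float x, float y) {
            if (activeClick != null) {
                // The action fires only if the release lands inside the
                // same component that received the press.
                if (activeClick.containsPoint(x, y)) activeClick.onClickReleased(x, y);
                activeClick = null;
            }
        }
    }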
With these systems in place, you just need to create a window, create a UI context, register the adapter layer to the window to act on that context, set up the window to draw the context on frame, then use the context to add / remove / modify components in your program.

Can receive pointerDragged but not pointerPressed in a container

I have a UI in Codename One where a container contains another container, which in turn contains some widgets. On the bottom-level container I'm able to receive pointerDragged events but not pointerPressed. The pointerPressed events seem to be consumed by the widgets at the top of the hierarchy and never travel down to the bottom container.
How can I fix this?
I'd like to do this to detect left-right swipes on the bottom container. Is there perhaps a better way to do that?
Only focusable components receive the events directly in Codename One; otherwise we would have to deliver events to multiple components, which might hurt performance.
The best way to do this is to use an existing component, e.g. Tabs, which supports swiping and hiding the tabs, so you can just use that.
Making the Container focusable is probably not as desirable (although possible), since it might make interaction with the components within it difficult.
You can use a pointer listener on the parent form, as the Tabs component does internally. This should always work, since the form gets all the events: https://github.com/codenameone/CodenameOne/
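As an illustration, here is a minimal sketch of swipe detection with form-level pointer listeners; bottomContainer and the 50-pixel threshold are assumptions you would adapt to your UI:
    Form form = bottomContainer.getComponentForm();
    int[] startX = new int[1]; // press position, shared between the listeners

    form.addPointerPressedListener(evt -> startX[0] = evt.getX());
    form.addPointerReleasedListener(evt -> {
        // Only treat the gesture as a swipe if it ends over our container.
        if (!bottomContainer.contains(evt.getX(), evt.getY())) {
            return;
        }
        int dx = evt.getX() - startX[0];
        if (dx > 50) {
            // swipe right
        } else if (dx < -50) {
            // swipe left
        }
    });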

Updating an NSView in response to events

I have a main view (a subclass of NSView), and as I'm new to Cocoa, I'd like to know how to update the view in response to events.
I know there are many methods that take events, such as -(void)mouseMoved:(NSEvent*)event or -(void)mouseClicked:(NSEvent*)event. My algorithm for deciding what to do is ready. I want to know where I should update the main view: in -(void)mouseMoved:(NSEvent*)event or in -(void)drawRect:(NSRect)dirtyRect? And if it is in drawRect:, how should I pass the information to it?
Thanks in advance!
Here's a quick explanation that will hopefully get you on your way:
Handle events
User actions are communicated to your views and windows by events (keyboard and mouse) and actions (events interpreted by buttons and other controls). Your views should react to these events and actions by updating the model - the lower-level data structures that represent whatever your program displays to the user. In Cocoa, the view typically communicates through a controller object to make changes to the model.
Invalidate display / trigger redraw
After you have updated your model, you need to inform the view that it needs to redraw. This can be done in several ways, but the simplest way to do it is -setNeedsDisplay:YES. This will ensure that at some point in the immediate future, your view will redraw itself to display the updated model data.
Draw
At some point Cocoa will call -drawRect: on your view. Inside -drawRect:, you should read the requisite data from your model and draw the necessary graphics. You shouldn't do any manipulation of the model in this method.

What's the difference between logical events and native events in GWT?

I notice that there are two methods by which an event handler can be hooked up to a GWT widget: addHandler and addDomHandler. The JavaDoc for addDomHandler says, "Adds a native event handler to the widget and sinks the corresponding native event. If you do not want to sink the native event, use the generic addHandler method instead."
I'd be very grateful if someone would enlighten me as to the difference between native events and logical events.
Native events are fired directly by the browser - events like clicks, mouseovers, keypresses, etc. To receive those events on a Widget, you have to specifically sink the events.
The generic events are, well, more generic. For example, I've created a SaveEvent and a DeleteEvent for my own use, that get fired when certain UI conditions are met. They are farther away from the browser and would never get fired directly by the browser. I think you should stick with the more generic events when you can. On the other hand, if you're creating a custom widget that you can't make out of other widgets - for example, if you want to build a slider that the user can click and drag - you'll need the DOM events.
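To make the difference concrete, here is a hedged sketch of both registration styles on a custom widget; SliderWidget and SaveEvent are hypothetical names following the usual GwtEvent pattern, not part of the GWT API:
    import com.google.gwt.dom.client.Document;
    import com.google.gwt.event.dom.client.ClickEvent;
    import com.google.gwt.event.shared.EventHandler;
    import com.google.gwt.event.shared.GwtEvent;
    import com.google.gwt.event.shared.HandlerRegistration;
    import com.google.gwt.user.client.ui.Widget;

    // A hypothetical logical event: never fired by the browser, only by our code.
    class SaveEvent extends GwtEvent<SaveEvent.Handler> {
        interface Handler extends EventHandler {
            void onSave(SaveEvent event);
        }
        static final Type<Handler> TYPE = new Type<Handler>();
        @Override
        public Type<Handler> getAssociatedType() { return TYPE; }
        @Override
        protected void dispatch(Handler handler) { handler.onSave(this); }
    }

    class SliderWidget extends Widget {
        SliderWidget() {
            setElement(Document.get().createDivElement());
            // Native: addDomHandler registers the handler AND sinks ONCLICK,
            // so the browser's click events actually reach this widget.
            addDomHandler(event -> startDrag(event.getX()), ClickEvent.getType());
        }

        private void startDrag(int x) { /* begin the drag gesture */ }

        // Logical: addHandler registers without sinking any DOM event; the
        // event only occurs when we call fireEvent(new SaveEvent()) ourselves.
        HandlerRegistration addSaveHandler(SaveEvent.Handler handler) {
            return addHandler(handler, SaveEvent.TYPE);
        }
    }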

Options for keeping models and the UI in sync (in a desktop application context)

In my experience, only two patterns have worked for large-scale desktop application development when trying to keep the model and UI in sync.
1. An event bus approach: command objects (e.g. UserDemographicsUpdatedEvent) are fired via a shared event bus, and the various parts of the UI bound to the user object updated by the event refresh themselves.
2. Binding the UI directly to the model, adding listeners to the model itself as needed. I find this approach rather clunky, as it pollutes the domain model.
Does anybody have other suggestions? In a web application with something like JSP, binding to the model is easy, as you usually only care about the state of the model at the time the request comes in; not so in a desktop-type application.
Any ideas?
I am currently using the event bus approach to synchronize the models and the UI in my application, but I have hit a hurdle: it's difficult to make it very fine-grained - for example, at the property level, where you are only interested in knowing whether property x of an object was updated, and there are hundreds or thousands of such cases.
For such fine-grained control, you might want to check out how KVC (Key-Value Coding) and KVO (Key-Value Observing) work in Cocoa. They basically allow an object to observe any other object's properties, as long as the observed object follows some basic principles of KVC. Interested objects automatically get notified upon changes, and you don't have to explicitly notify the observers on each property change, as that is taken care of by the underlying implementation of KVO. It is somewhat similar to the PropertyChange listeners in Java beans.
If there are too many observations going on, and writing the glue code to update models/views on property changes becomes problematic, you might want to take it a step further and use data binding to keep models and views synchronized. Built upon the concepts of KVO, the idea is to bind properties of objects so that a change in one automatically updates the other, and vice versa. For example, you could bind the text in SO's answer field to the answer preview that we see right below it:
.bind('answer.value', 'answerPreview.text')
Both happen to be view elements in this case, but data binding is a generic approach and can be used to bind objects more broadly, not just UI with models.
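In Java, the closest built-in analogue to KVO is the java.beans property-change machinery mentioned above. A minimal sketch (User and nameLabel are hypothetical names for illustration):
    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    class User {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private String name;

        public void setName(String newName) {
            String old = this.name;
            this.name = newName;
            // Only listeners registered for "name" (or for all properties)
            // are notified, which gives the per-property granularity.
            pcs.firePropertyChange("name", old, newName);
        }

        public void addPropertyChangeListener(String property, PropertyChangeListener l) {
            pcs.addPropertyChangeListener(property, l);
        }
    }

    // A view observes exactly the one property it cares about:
    // user.addPropertyChangeListener("name",
    //         evt -> nameLabel.setText((String) evt.getNewValue()));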
