Stopping clicks on an object on one stage from going through to the nextStage in EaselJS / CreateJS - events

I have two canvases and two stages in CreateJS / EaselJS. The first stage has autoClear set to false, and I do dynamic drawing on it starting with a stagemousedown event. The second stage uses nextStage to send mouse events down to the first stage, and it holds interface elements such as a Bitmap that I want to press to go to another page. When I click on the Bitmap, the stage beneath does the dynamic drawing. I want the click on the Bitmap not to go through to the first stage, but stopImmediatePropagation does not work, and neither does putting a clone of the Bitmap with mouseEnabled set to false underneath it. I can just use mousedown on the Bitmap so the user does not notice as much, but I was wondering: is there a way to stop mouse events from passing through the top stage when they are acting on an object with an event listener set to capture them? Thanks in advance.

The stagemousedown and other stage events are decoupled from the EaselJS object event model. They are catch-all events, which basically represent mouse interaction with the canvas. Because of this, catching and stopping these events won't interrupt the display list hierarchy.
Typically, if you want to block other events on the same stage, you can create a canvas-sized box (Shape, etc.) that will block the interaction. When dealing with nextStage this is especially true, since we are passing on events that are unhandled by objects in the EaselJS display list.
A better approach might be to toggle the nextStage on stagemousedown, so it is null during the click event. Not sure if this will work, but it's a start.
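For illustration, here is a minimal sketch of a variant of that idea, with uiStage (top) and drawStage (bottom) as assumed names. It toggles nextStage on rollover/rollout rather than stagemousedown, which assumes enableMouseOver() is active so the bitmap receives over/out events:

    // Normal relaying: unhandled events on the top stage pass down.
    uiStage.nextStage = drawStage;
    uiStage.enableMouseOver(20); // required for rollover / rollout

    // While the cursor is over the button bitmap, stop relaying so the
    // drawing stage never sees the press; restore relaying afterwards.
    button.on("rollover", function () { uiStage.nextStage = null; });
    button.on("rollout", function () { uiStage.nextStage = drawStage; });

    button.on("click", function () {
        // go to the other page here
    });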

Related

How to detach a WebGL transform control when clicking somewhere other than the transform control

I'm developing WebGL graphics with three.js.
While developing, I've run into something I'm stuck on.
I'm trying to detach the transform control when I click somewhere else (e.g., the grid or another object) rather than the transform control itself.
I implemented it by checking the clicked position.
For example, I set an event handler on mousedown that stores the mouse position, and an event handler on mouseup that compares the mouseup position against the stored mousedown position. If the two positions are the same, I detach the transform control. However, with this implementation the transform control also disappears when I click on the transform control itself.
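For reference, a minimal sketch of the approach described above (renderer and transformControls are assumed names). The transformControls.axis check is one possible guard: the control sets axis to a non-null value while the pointer is over its gizmo, so skipping the detach in that case avoids removing the control when it was the thing being clicked:

    let downPos = null;

    renderer.domElement.addEventListener("mousedown", function (e) {
        downPos = { x: e.clientX, y: e.clientY };
    });

    renderer.domElement.addEventListener("mouseup", function (e) {
        const isClick = downPos !== null &&
            downPos.x === e.clientX && downPos.y === e.clientY;
        // Without the axis check, this detaches on any stationary click,
        // including clicks on the transform control itself.
        if (isClick && !transformControls.axis) {
            transformControls.detach();
        }
    });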
I just want to detach the transform control when I click somewhere other than the transform control.
Can anyone help me with this issue?
Thank you in advance.

Programming a custom GUI in OpenGL

I am creating my own GUI in OpenTK.
I want to fire a mouse event when the cursor is, for example, over one of the GUI controls. How can I do that? Right now I'm just iterating through a list of items in the main class, and in the OpenTK window's MouseMove event I just check whether the mouse coordinates are within the "region" of the component I'm drawing.
This works for now, but I think it could be done in a better way. As it stands, my code is disorganized and lives in the main class, and I would rather have it in the specific component class.
What I would like is to have an event attached to each component of my GUI, so that I can define many events for one component.
I mean, I would like to have, for example, a button component with a method I can override or just use, which fires when an event occurs. The same as OpenTK's window, where you can override events.
This is not a complete answer, because your question is quite broad, but I hope it helps.
In order to implement such a system, here are the core components for a potential design:
UI Components: Some kind of standard interface where different component types can define logic for interactions. Depending on the language, the most common approach is probably something like a parent class Component, with methods to be overridden (see the sketch after this list). These would probably include things like:
Mouse Hover
Mouse Click (press / release)
Click drag
It will also likely need some additional associated information:
Some way to determine the component's location. Could be providing a bounding box, or perhaps a method that tests if a given point is within this component or not.
Information or functionality for drawing the component.
Display and layering settings (is it visible or hidden, should it draw on top of other components or behind).
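To make this concrete, here is an illustrative sketch (written in JavaScript for brevity, though in OpenTK this would be C#; all names are assumptions):

    // Base class: location, display settings, and overridable hooks.
    class Component {
        constructor(x, y, width, height) {
            this.x = x; this.y = y;
            this.width = width; this.height = height;
            this.visible = true; // display setting
            this.zIndex = 0;     // layering setting
        }
        // Location test: is the given point inside this component?
        containsPoint(px, py) {
            return px >= this.x && px < this.x + this.width &&
                   py >= this.y && py < this.y + this.height;
        }
        // Interaction hooks, overridden by concrete component types.
        onHover(px, py) {}
        onClickPressed(px, py) {}
        onClickReleased(px, py) {}
        onDrag(px, py) {}
        // Drawing functionality.
        draw(graphics) {}
    }

    // A concrete component overriding the hooks it cares about.
    class Button extends Component {
        onClickReleased(px, py) {
            console.log("button clicked");
        }
    }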
UI Context: The context is a structure that defines the set of components that exist in the UI. This could be something like a list structure of Components. In order to build your UI, you would add components to this context. The context will define some behaviour (see the sketch after this list):
Managing components (add / remove / modify).
How to draw the entire context (for example, looping over each component and executing the draw functionality for each).
Handling of events (see next section)
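Continuing the illustrative sketch, a minimal context might look like this:

    // Holds the set of components and defines context-level behaviour.
    class UiContext {
        constructor() {
            this.components = [];
        }
        add(component) { this.components.push(component); }
        remove(component) {
            this.components = this.components.filter(c => c !== component);
        }
        // Draw visible components back-to-front by layer.
        draw(graphics) {
            [...this.components]
                .sort((a, b) => a.zIndex - b.zIndex)
                .forEach(c => { if (c.visible) c.draw(graphics); });
        }
    }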
Event Dispatch: To make your UI usable, you can insert an "adapter" layer that handles events from your windowing library (OpenTK) and then translates them into usable events for your components and dispatches them. Here is an example of how this might work for a "click" event (sketched in JavaScript, continuing the illustrative names from above):
    function onTkClickPressed(point) {
        // Forward the windowing library's click to every component
        // whose area contains the event point.
        for (const component of context.components) {
            if (component.containsPoint(point.x, point.y)) {
                component.onClickPressed(point.x, point.y);
            }
        }
    }
This is actually the trickier part of the design, in my opinion, because there are some subtle conventions around how component-based UI works. You don't necessarily have to follow them, but they're at least worth being aware of, because they are probably how people will expect your UI to work:
After click press, click drag continues to occur until click release, even if the cursor leaves the component area.
"Actions" occur on click release.
Click release only takes action if the corresponding click press occurred on the same component.
The click release doesn't take any action if the cursor is no longer inside the component (leaving and re-entering the component before release still does the action, though).
You can only be actively clicking one component at a time (the one shown on top), even if multiple components overlap at that spot.
Assuming that you follow these conventions, this means that dispatching events is actually a bit more complicated than just checking if the event point was in a given component or not. You need to maintain some kind of state to keep track of whether the context is currently in a click or not, and which component, if any, is "consuming" the current click. That is, which component should be given the click release and drag events if they occur.
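Refining the earlier dispatch sketch to follow these conventions, the adapter keeps track of which component is consuming the current click (again illustrative JavaScript with assumed names):

    let activeComponent = null; // component consuming the current click

    function onTkClickPressed(point) {
        // Only the topmost visible component under the cursor is clicked.
        const hits = context.components
            .filter(c => c.visible && c.containsPoint(point.x, point.y))
            .sort((a, b) => b.zIndex - a.zIndex);
        if (hits.length > 0) {
            activeComponent = hits[0];
            activeComponent.onClickPressed(point.x, point.y);
        }
    }

    function onTkMouseMoved(point) {
        // Drag keeps targeting the pressed component, even if the
        // cursor leaves its area before release.
        if (activeComponent !== null) {
            activeComponent.onDrag(point.x, point.y);
        }
    }

    function onTkClickReleased(point) {
        if (activeComponent !== null) {
            // The "action" only fires if the release happens inside
            // the same component that received the press.
            if (activeComponent.containsPoint(point.x, point.y)) {
                activeComponent.onClickReleased(point.x, point.y);
            }
            activeComponent = null;
        }
    }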
With these systems in place, you just need to create a window, create a UI context, register the adapter layer with the window to act on that context, set up the window to draw the context each frame, and then use the context to add / remove / modify components in your program.

KineticJS: Clicks to background layers stop firing after draw

KineticJS seems to have an issue with handling clicks on background layers after redrawing the stage.
I have a jsfiddle with a minimal example of this problem. http://jsfiddle.net/Z2SJS/
On line 34 I have:
stage.draw()
If this is commented out, events fire as they should. When it is present, click events to the background stop firing after dragging.
I know that in this example I am not doing anything that would require me to redraw the stage, but in my project I am using the dragstart and dragmove events to manipulate objects on multiple layers, and I then lose reference to my background clicks.
Is there something I need to do to ensure that redrawing the stage does not cause my events to stop firing?
Instead of using stage.draw(), use foreground.draw().
Here is the updated fiddle.
Alternatively: set dragOnTop: false inside the circle instantiation. Fiddle2
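For illustration, the second suggestion would look something like this inside the circle's constructor config (the other option values here are assumptions):

    var circle = new Kinetic.Circle({
        x: 100,
        y: 100,
        radius: 30,
        fill: 'red',
        draggable: true,
        // Keep the node on its own layer while dragging instead of
        // moving it to a temporary drag layer.
        dragOnTop: false
    });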

Using drag events to switch rectangles in a grid in Windows Phone 7.5

I've run into a bit of a pickle with my puzzle game for Windows Phone.
I want to swap two adjacent rectangles, both on the same grid.
The tap event was easily implemented, but implementing drag seems to be a really big pain.
I'm also using a custom user control to get the rectangles onto the grid, so I need to create custom delegates before attaching events to my rectangle matrix.
I am currently using the ManipulationCompleted and ManipulationStarted events to implement the drag gesture, but there are a couple of problems:
1) I have to tell the difference between a tap and an actual drag, both of which are covered by the ManipulationCompleted event. This is the way I do it right now:
if (e.TotalManipulation.Translation.X == 0 && e.TotalManipulation.Translation.Y == 0)
{
    // tap: no translation occurred
}
else
{
    // do drag stuff here
}
However, the "do drag stuff here" part does not seem to work even when the translations are different from 0; it always executes the tap branch.
I am currently stuck using manipulation events because, as I said, I am using a custom control as an object prototype for my rectangle matrix, and I need custom delegates for that, and apparently the GestureListener has no constructors for its event classes.
So, any suggestions on how to do this?
I figured out the answer just after posting this question.
You can actually attach a GestureListener to a custom control and create custom delegates: send the drag gesture event parameter from the GestureListener's drag event to the delegate you create, and it works.

How does the Outlook app's delete checkbox UI XAML code work?

If you tap on the left-hand side of the screen in Outlook, an event is triggered (in this case a checkbox appears).
I would like to know how this is achieved in XAML. It cannot be a simple "MouseLeftButtonUp" event, because if you drag your finger more than a few pixels the event does not trigger.
In my own app I am trying to get an icon to appear within a listbox that has a SelectionChanged event. The issue is that if you do not touch the small icon precisely, you trigger the listbox event rather than the event I want to occur when pressing the image.
I think I need to wrap my image in a Canvas but then am still stuck as to what the event should be.
How do you increase the target size of the area where a user can click on your element?
What event should an image have when within a listbox (which is within a pivot) that has a SelectionChanged event? (MouseLeftButtonUp causes issues if you half drag to the next pivot and lift your finger - it triggers the MouseLeftButtonUp event)
I implemented something very similar to that behavior by making an ItemTemplate where the checkbox was pushed offscreen to the left using a negative margin.
I then created two visual states, one for Open and one for Closed. The Open state set the margin to 0, bringing the checkbox back onscreen; the Closed state had the negative margin.
With the FluidMoveBehavior, switching between states on button press was EASY. The only thing you'd have to add would be an invisible button/touch area on the left that would also trigger "opening" the checkbox column (changing state to reset the margins).
Hope that helps...
The Outlook app is a native app, so it probably isn't using XAML at all.
If you're worried about the mouse events, then you should look at the gesture support in the Silverlight Toolkit; it contains Tap, etc. events that make a little more sense on the phone.
Increasing the target size and generally making stuff touchable: wrap it in a Button, then alter the ControlTemplate for the Button to remove the border.
If you look at the ControlTemplate for a Button, (Expression Blend, Edit Template, Edit a copy) you'll see the mechanics of the touch area. It's nothing more than padding/margin.
Thus, you can't bleed your touch region out without altering the layout and affecting other items around the control. I'd do two things:
First, I'd think about whether my whole control should be larger in the first place with good spacing around it. Is my design right?
Second, I'd cheat. I'd float a fixed-size button with no border over the area, using a translate transform to move it around freely.
Good luck,
Luke
