KineticJS seems to have an issue with handling clicks on background layers after redrawing the stage.
I have a jsfiddle with a minimal example of this problem. http://jsfiddle.net/Z2SJS/
On line 34 I have:
stage.draw()
If this is commented out, events fire as they should. When it is present, click events on the background stop firing after a drag.
I know that in this example I am not doing anything that would require me to redraw the stage, but in my project I am using the dragstart and dragmove events to manipulate objects on multiple layers, and my background clicks then stop working.
Is there something I need to do to ensure that redrawing the stage does not cause my events to stop firing?
Instead of using stage.draw(), use foreground.draw().
Here is the updated fiddle.
Alternatively, set dragOnTop: false inside the circle instantiation. Fiddle2
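For reference, here is a minimal sketch combining both suggestions; the layer and shape names are illustrative and the original fiddle's setup may differ:

// Hypothetical setup: a clickable background layer plus a draggable circle.
var stage = new Kinetic.Stage({ container: 'container', width: 400, height: 300 });
var background = new Kinetic.Layer();
var foreground = new Kinetic.Layer();

background.add(new Kinetic.Rect({ x: 0, y: 0, width: 400, height: 300, fill: '#eee' }));
background.on('click', function () {
  console.log('background clicked');
});

var circle = new Kinetic.Circle({
  x: 100, y: 100, radius: 30, fill: 'red',
  draggable: true,
  dragOnTop: false   // suggestion 2: keep the shape on its own layer while dragging
});
foreground.add(circle);

stage.add(background);
stage.add(foreground);

circle.on('dragmove', function () {
  foreground.draw(); // suggestion 1: redraw only the layer that changed, not the whole stage
});

The intent is to avoid redrawing the background layer in the middle of a drag, so its click handlers keep working.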
I have two canvases and two stages in CreateJS / EaselJS. The first stage has autoClear set to false, and I am doing dynamic drawing on it starting from a stagemousedown event. The second stage uses nextStage to send mouse events on to the first stage.

The second stage holds interface elements, such as a Bitmap that I want to press to go to another page. When I click on the Bitmap, the stage beneath does its dynamic drawing as well. I want the click on the Bitmap not to go through to the first stage, but stopImmediatePropagation does not work, and neither does putting a clone of the Bitmap with mouseEnabled set to false underneath it.

I can just use mousedown on the Bitmap so the user does not notice as much, but I was wondering whether there is a way to stop mouse events from passing through the top stage when they are acting on an object with an event handler set to capture them. Thanks in advance.
The stagemousedown and other stage events are decoupled from the EaselJS object event model. They are catch-all events, which basically represent mouse interaction with the canvas itself. Because of this, catching and stopping these events won't interrupt the display list hierarchy.
Typically, if you want to block other events in the same stage, you can create a canvas-size box (a Shape, etc.) that blocks the interaction. This is especially true when dealing with nextStage, since what gets passed on are the events that are unhandled by objects in the EaselJS display list.
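As an illustration (not from the original answer), a blocking box might look like this in EaselJS; uiStage and the sizes are placeholders, and the empty Shape gets a full-canvas hitArea so it can intercept hits without drawing anything:

var uiStage = new createjs.Stage('uiCanvas');

// Invisible blocker: an empty Shape whose hitArea covers the whole canvas.
var blocker = new createjs.Shape();
var hit = new createjs.Shape();
hit.graphics.beginFill('#000').drawRect(0, 0, uiStage.canvas.width, uiStage.canvas.height);
blocker.hitArea = hit;

// A mouse listener makes the blocker part of the hit test, so presses stop here.
blocker.on('mousedown', function () { /* swallow the press */ });

uiStage.addChildAt(blocker, 0); // beneath the UI Bitmap, in front of anything behind it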
A better approach might be to toggle nextStage on stagemousedown, so that it is null during the click event. Not sure if this will work, but it's a start.
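A rough sketch of that toggle idea, with placeholder canvas ids and image path; as the answer says, it may need some experimentation:

var drawingStage = new createjs.Stage('drawingCanvas'); // bottom stage: dynamic drawing
drawingStage.autoClear = false;

var uiStage = new createjs.Stage('uiCanvas');           // top stage: interface
uiStage.nextStage = drawingStage;                       // relay unhandled events downward

var button = new createjs.Bitmap('button.png');         // placeholder image path
uiStage.addChild(button);

button.on('mousedown', function () {
  uiStage.nextStage = null;          // stop relaying while the UI element owns the press
});

button.on('pressup', function () {
  uiStage.nextStage = drawingStage;  // restore pass-through when the press ends
  // ...navigate to the other page here...
});

createjs.Ticker.on('tick', function () {
  drawingStage.update();
  uiStage.update();
});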
How is the page scrolling on flashvhtml.com created? How is the scrolling triggered by the links at the top, and how are the other 'sub animation' events tied in to the scrolling background?
This is what I have been able to gather:
Listeners for interactions (mouse dragging, keyboard events, touch, etc.) are added in the Trackpad.js file. All of them feed a value variable.
This value variable from Trackpad.js is then used to adjust the camera position in the update method of the main.js file.
There are three main views being rendered, if I understand it correctly: ScrollView, ScaleView, and RocketView. All of them are initiated inside the init method of main.js, but they are all defined in the fvh.js file.
Each of these three views has an updatePosition method taking camera.y or mainScrollPosition as a parameter. These updatePosition methods are called inside the same update method of the main.js file.
Then there is a ScrollMap.js which contains loads of position data for all 3 views e.g. it contains ScrollView data in the format of:
mcxxx:{view:'nameofelement',depth:xx,startFrame:xxx,endFrame:xxx,position:[xxx,...]} etc.
Also there is a sectionLandPositions variable defined in the main.js file, which is very interesting because it is what the onMenuItemPressed method in the same file then uses to tween a given section into view.
So the magic basically happens in the updatePosition methods of each view and in how the value is computed in Trackpad.js. And this is where I leave you to debug further and take it home. :)
The files under scrutiny are: Trackpad.js, fvh.js, ScrollMap.js, main.js. I hope you find it all useful.
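To make the flow concrete, here is a schematic sketch of that pattern; it is not the site's actual code, and the names only loosely mirror Trackpad.js and main.js:

// A Trackpad-like object turns wheel/drag/key input into a single eased value.
var trackpad = {
  value: 0,
  target: 0,
  listen: function () {
    var self = this;
    window.addEventListener('wheel', function (e) {
      self.target += e.deltaY;                        // other inputs would also feed target
    });
  },
  update: function () {
    this.value += (this.target - this.value) * 0.1;   // ease toward the target
  }
};

// Stand-ins for ScrollView / ScaleView / RocketView, each with updatePosition().
function makeView(name) {
  return {
    updatePosition: function (cameraY) {
      // map cameraY to this view's own element positions (ScrollMap-style data)
      console.log(name, 'repositioned for camera y =', cameraY);
    }
  };
}
var views = [makeView('ScrollView'), makeView('ScaleView'), makeView('RocketView')];

// The main.js-style update loop: compute the value, then push it to every view.
function update() {
  trackpad.update();
  views.forEach(function (view) {
    view.updatePosition(trackpad.value);
  });
  requestAnimationFrame(update);
}

trackpad.listen();
requestAnimationFrame(update);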
P.S. Kudos to Waste-Creative for creating this informative and engaging website.
An easy way to do this is by adding tweens and then using the scroll/drag input and the links to move around in the tweens' timeline.
Tween one: pans the camera down slowly and continuously.
Tween two: waits until x, fades a sprite in until y, fades the sprite out, and so on.
Make sure you don't 'play' the tweens after creating them, but adjust the time manually (based on the scroll position).
There are a few tweening frameworks you can use for Pixi: GreenSock, Impact, tween.js.
And there's a discussion on it over at the html5gamedevs forum.
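For example, assuming GSAP (GreenSock) with TimelineLite loaded, you can build the whole sequence as a paused timeline and scrub it from the scroll value; the camera and sprite objects here are just placeholders:

var camera = { position: { y: 0 } };   // stand-in for a Pixi container/camera
var sprite = { alpha: 0 };             // stand-in for a Pixi sprite

var timeline = new TimelineLite({ paused: true });
timeline
  .to(camera.position, 4, { y: -2000, ease: Linear.easeNone }) // tween one: pan down
  .to(sprite, 1, { alpha: 1 }, 1)                              // tween two: fade in at t=1
  .to(sprite, 1, { alpha: 0 }, 3);                             // ...and fade out at t=3

// Never play() the timeline; scrub it from the scroll/drag input or a menu link.
function onScroll(scrollFraction) {    // scrollFraction is 0..1
  timeline.progress(scrollFraction);
}

onScroll(0.5) would jump to the midpoint of the sequence, and a menu link can simply tween the scroll value itself to glide to a section.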
I've run into a bit of a pickle with my puzzle game for Windows Phone.
I want to switch between two adjacent rectangles, both on the same grid.
The tap event was easily implemented, but implementing drag seems to be a really big pain.
I'm also using a custom user control to get the rectangles onto the grid, so I need to create custom delegates before attaching events to my rectangle matrix.
I am currently using the ManipulationCompleted and ManipulationStarted events to implement the drag gesture, but there are a couple of problems:
1) I have to tell the difference between a tap and an actual drag, both of which are covered by the ManipulationCompleted event. This is the way I do it right now:
// inside the ManipulationCompleted handler
if (e.TotalManipulation.Translation.X == 0 && e.TotalManipulation.Translation.Y == 0)
{
    // treat this as a tap
}
else
{
    // do drag stuff here
}
However, the "do drag stuff here" part does not seem to work even when the translations are different from 0; it always executes the tap branch.
I am currently stuck using manipulation events because, as I said, I am using a custom control as an object prototype for my rectangle matrix and I need custom delegates for that, and apparently the GestureListener has no constructors for its event classes.
So, any suggestion on how to do this?
I figured out the answer just after posting this question.
You can actually attach a GestureListener to a custom control and create custom delegates: pass the drag gesture event parameter from the GestureListener's drag event on to the delegate you create, and it works.
I have a custom view that is a subview of the main window. I have a timer that fires [self setNeedsDisplay:TRUE], which updates the drawing in the view. But from what I can see, if I leave the application in the background and switch to another one, the view does not reflect the new drawing until I click on the application again.
What could I be missing?
Thank you,
Jose.
setNeedsDisplay will not trigger drawing if it thinks the user can't see what's being drawn, for obvious reasons. This includes offscreen and obscured views, even while the app is running.
In an app where you might be drawing data over time, you would accumulate the relevant data while backgrounded and draw it all at once when the app resumes.
I have some NSViews that I'm putting in one of two layouts depending on the size of my window.
I'm adjusting the layout when the relevant superview receives the resizeSubviewsWithOldSize: message.
This works, but I'd like to animate the change. So naturally I tried calling the animator proxy when I set the new frames, but the animation won't run while the user is still dragging. If I release the mouse before the animation is scheduled to be done I can see the tail end of the animation, but nothing until then. I tried making sure kCATransactionDisableActions was set to NO, but that didn't help.
Is it possible to start a new animation and actually have it run during the resize?
I don't think you can do this easily because CA's animations are run via a timer and the timer won't fire during the runloop modes that are active while the user is dragging.
If you can control the runloop as the user is dragging, play around with the runloop modes. That'll make it work. I don't think you can change it on the CA side.
This really isn't an answer, but I would advise against animating anything while the user is dragging to resize a window. The screen is already animating (from the window resizing); further animations are likely to be visually confusing and extraneous.
CoreAnimation effects are best used to move from one known state to another: for example, when a preference window is resizing to accommodate a new pane's contents and you know both the old and new sizes, or when you are fading an object in or out (or both). Animating while the window is being resized will be visually distracting and make it harder for the user to focus on getting the window to the size they want.