How does CATransition work?

CATransition is quite unusual. Consider the following code.
CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:nil];
self.loadingView.hidden = YES;
self.displayView.hidden = NO;
Notice that nowhere did I tell the transition that I wanted to display the displayView rather than loadingView, so the views must somehow access the transition themselves. Can anyone explain in more detail how this works?

When you add the transition as an animation, an implicit CATransaction is begun. From that point on, all modifications to layer properties are going to be animated rather than immediately applied. The way CATransition performs this animation is to take a snapshot of the view before the layer properties are changed, and a snapshot of what the view will look like after the layer properties are changed. It then uses a filter (on Mac this is Core Image, but on iPhone I'm guessing it's just hard-coded math) to interpolate between those two images over time.
This is a key feature of Core Animation. Your draw logic doesn't generally need to deal with the animation. You're given a graphics context, you draw into it, you're done. The system handles compositing that with other images over time (or rotating it in space, or whatever). So in the case of changing the hidden state, the initial-state fully composited image is blended with the final-state composited image. Very fast on a GPU, and it doesn't really matter what change you made to the view.
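To make that concrete, here is a minimal sketch in the spirit of the question's code (holdingView/loadingView/displayView are the question's names; the push type is just an illustrative variation). The transition captures whatever layer-tree change follows it in the same run-loop pass:

#import <QuartzCore/QuartzCore.h>

CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionPush;         // slide instead of fade
trans.subtype = kCATransitionFromRight; // direction of the push
[self.holdingView.layer addAnimation:trans forKey:nil];

// Any property change made before the next run-loop pass ends up in the
// "after" snapshot, so the blend works regardless of what was modified.
self.loadingView.hidden = YES;
self.displayView.hidden = NO;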

Related

JavaFX DragView Image has lowered opacity compared to actual image... Can that be changed?

I have been working with drag and drop in JavaFX, and one thing I notice is that dragged items are shown at lowered opacity compared to their actual images; worse, the larger the image, the more faded/transparent it gets. I am working on an application that drags fairly large items, which end up almost invisible while dragging. That defeats the purpose of using a DragView image in the first place and makes this unusable for larger items (the way I'm using the DragView, I want the images to be a certain size relative to the scene I'm dragging onto).
My application essentially has two windows (Stages): the data lives in one Stage, and I take an item from a list and drag it onto the scene. The DragView image is a representation of the object going onto the scene; when I drop it, it lands exactly in place on the scene, so the DragView is important to my overall application.
Just a NOTE: There is also backing data in my transfer in order to recreate the box, as well as additional data that gets transferred with the Drag and Drop.
I tried looking at the JavaFX DragView internals but didn't find anything that sets the opacity, so I keep coming back to my assumption that it's built into the OS: lowered drag opacity is something I remember being OS-provided, and when I try it natively it does happen, but only very slightly, nothing as bad as what I'm experiencing. (It's also possible that, since those icons are much smaller, they don't run into the "larger image opacity" issue.)
I am running Windows 7 64-Bit for those who are wondering.
My question is: is it possible to change the opacity of the drag-and-drop image, either via JavaFX or natively, using something along the lines of JNI?
I don't have any example code at the moment, but I can add some if someone is interested; anyone familiar with drag-and-drop DragView images will already know about the opacity behavior.
Thank you all.

EaselJS and multi layered canvas system: performance tuning, game developing, event handling

I'm an engineer and we are currently porting our Red5 + Flash game to a Node.js + EaselJS HTML5 application.
Basically: it's a board game, not an RPG. The layer system means we have multiple canvases, split by functionality. For example, there is a static background stage with images, and a layer just for the timers.
By default every canvas is 1920x1080; if needed we downscale to fit the resolution.
The first approach used kinetic.js, but performance fell as the game grew complex. Then we switched to Easel because its abstraction level is lower, so we can decide how to implement some functions ourselves instead of only using the provided, more heavyweight ones.
I was optimistic, but now it's starting to show slowness again, which is why I want to look deeper and do fine-grained performance tuning. (Of course everything is fine in Chrome; Firefox is the problem, but the game must run smoothly on all modern browsers.)
The main layer (stage) is the map. It contains ~30 containers, each holding a complex custom shape and ~10 images. The containers listen to mouse events like mouseover, mouseout, and click. Currently, for example, on mouseover I refill the shape with a gradient.
Somehow, when I use caching the way the tutorials show, performance gets even worse, so I assume I'm messing something up.
I collected some advanced questions:
1. In the described situation, when can I use cache() and how? I've already tried cache() on init and updateCache() after refilling with another color or gradient, then stage.update(). No impact.
2. If I have a static, never-changing stage, caching doesn't make sense on that layer, right?
3. What exactly does stage.update() do? Does it trigger a full redraw of the layer? The docs mention some kind of "redraw only if changed" behavior.
4. If I want to refill a custom shape with a new color or gradient, I have to completely redraw its graphics, not just call a setFill method, right?
5. In Easel there is no way to redraw just one container, so how can I avoid updating the whole stage and update only the container that changed? I thought I could achieve this with caching (cache all containers, then update only the one that changed), but that approach didn't work at all for me.
6. Does it make sense to cache bitmap images? If a container holds both custom shapes and images, what is better: caching the container or just the shape inside it?
I also found a strange bug, or at least an interesting clue. My canvas layers overlap completely. On the lower layers the mouseover listener works fine, but the click listener does not, on the very same container/object.
How can I propagate a click event to overlapped layers that have click listeners? I tried with plain DOM and jQuery, but the event objects were far from what the canvas listeners expected.
In brief, methods and properties I've already played with, with little success, while tuning: cache(), updateCache(), update(), mouseEnabled, snapToPixel, clear(), autoClear, enableMouseOver, useRAF, setFPS().
Any answer, suggestion, starting point appreciated.
UPDATE:
This free board game is a strategy game, so you face a world map with ~30 territories. The custom shapes are the territories, and a container holds a territory shape plus the icons shown over it. Overlap between containers is minimal.
An example mouse event is the hover effect: the player moves over a territory shape, the shape gets recolored, resized, etc., and a bubble shows up with details about the place.
Basically, at most 1-3 containers change at once (except during the init phase, when all of them do). Not only are the animations and recoloring slow in Firefox; the listener delay is high too.
I wrote a change handler, so on each tick I only call stage.update() on the stages that were modified or that have a running animation (TweenJS).
In my first approach I put into each container every image that could be needed at least once during the game, so I only toggle visible flags on images (not vectors).
Regarding caching:
There are some strange caching issues; performance can drop with certain sizes of the caching rectangle: CreateJS / EaselJS Strange Performance with certain size shapes
(2) It depends on how often you call stage.update().
(3) From the docs:
Each time the update method is called, the stage will tick any descendants exposing a tick method (ex. BitmapAnimation) and render its entire display list to the canvas. Any parameters passed to update will be passed on to any onTick handlers.
=> AFAIK it re-renders everything that isn't cached.
(4) Yes.
(5) No. (I don't know of any way.)
(6) If the contents of the container don't change often, I'd cache the whole container; otherwise the container will be reconstructed every frame.
I have a question though: Why do you use multiple canvases? How many do you use? I could imagine that using multiple canvases might slow down the game.
How many sprites do you use in total?
2: If your layer or stage doesn't change, don't call stage.update() for that layer, so it doesn't get re-rendered (this gives me much lower CPU usage!).
For example, keep a global "stagechanged" variable and set it to true when something has changed:
createjs.Ticker.addEventListener("tick", function() {
    if (stagechanged) {
        stagechanged = false;
        stage.update();
    }
});
(or do you already use this, as stated in your "update"?)
4: I found a way to update, for example, the fill color :)
container1.shape1.graphics._fillInstructions[0].params[1] = '#FFFFFF';
(Use the Chrome debugger to inspect the _fillInstructions array and see which position holds your color. Note that _fillInstructions is an internal, undocumented structure, so this may break in newer EaselJS versions.)
5: I found a way to paint just one container :)
// manually draw a single container
var ctx = stage.canvas.getContext("2d");
ctx.save();
container1.updateContext(ctx); // apply the container's transform (x, y) to the context
container1.draw(ctx);
ctx.restore();

How to translate and scale an NSImage?

I have built an application that lets the user drag and drop images onto an NSImageView. Now I want to be able to move those images by clicking on any image and dragging it with the mouse button held down.
How can I manipulate the NSImageView to translate/scale after the images have been set down? Is that possible? I've read about NSAffineTransform, but that seems to apply while drawing the image in the first place, not to moving an image that is already displayed. I already have the images on the canvas and simply want to click, hold, and move an image with the mouse. Can anyone help?
There are two sides to this.
NSImage is the model object, which you might want to display in different ways, save to disk/archive, etc. If you want to actually change the model (scaling, rotating, etc.), implying a permanent change, then you are going to probably want to look at NSAffineTransform, Quartz drawing, etc.
But you probably didn't mean that. Instead you probably are interested in NSImageView, which is a view object, displaying the contents of the NSImage model object using whatever display attributes are desired. If you only want to change how an image is displayed, not what the actual bytes in the image are, then you are going to manipulate the NSImageView at run-time. You can use NSAffineTransform here as well, but it's somewhat uncommon (and usually unnecessary).
The key thing to note is that NSImageView inherits from NSView, so you have all of NSView's power at your disposal. Take a look at methods such as:
-setFrameSize: - useful for changing the view size, and thus the image display scale
-setFrameOrigin: - useful for changing the view position, and thus the apparent image position
Note again that these have nothing to do with images per se, and apply to all Cocoa views. You may want to take a look at a book like Cocoa Programming for Mac OS X to get you past the basics. (You can then do more interesting things, like rotation, animation, etc.)
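As a hedged sketch of that view-manipulation route (the subclass and its name are hypothetical, not from the question), a minimal draggable image view could look like this:

@interface DraggableImageView : NSImageView
@end

@implementation DraggableImageView

- (void)mouseDragged:(NSEvent *)event {
    // Shift the frame origin by the mouse delta. deltaY is positive when
    // the mouse moves down, while view coordinates grow upward, hence the minus.
    NSPoint origin = self.frame.origin;
    origin.x += [event deltaX];
    origin.y -= [event deltaY];
    [self setFrameOrigin:origin];
    [self.superview setNeedsDisplay:YES]; // repaint the area the view vacated
}

@end

Scaling works the same way: call -setFrameSize: (from a scroll-wheel or gesture handler, say) and, with a suitable imageScaling setting, the image redraws at the new size.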

"CoreAnimation: surface is too large"

I'm creating a custom (layer-hosting) document view, which is contained within a scroll view. The root layer has two sublayers of the same size: one for the view's content, and one for anything that needs to hover over the main content. I set the frame to 2500x2500 and added a number of cells to the content layer, which was fine. On adding a translucent clone of one of the cells' layers to the overlay layer, the whole view clears briefly and I get the log message 'core animation: surface 2502x2502 is too large'. This happens between adding the new layer and the next cycle of the event loop, so I guess it occurs when Core Animation renders the new layer.
I knew that a layer's content size is related to the OpenGL texture size limit, but didn't think its frame mattered. I'm not drawing anything into these layers, I'm not setting any style properties, and I remove offscreen sublayers. All I'm really using them for is to handle the geometry of the document view. Is this an appropriate use of CA layers? If not, are there better ways to handle a large Core Animation-based document view?
Edit:
I've had this problem again, caused by an implicit animation on adding sublayers to the large parent. So in addition to what is suggested below, that's one to check if you run into this.
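If you hit that implicit-animation variant, a minimal sketch of the workaround (layer names are illustrative) is to disable actions around the insertion:

[CATransaction begin];
[CATransaction setDisableActions:YES]; // no transient animation surface is created
[bigParentLayer addSublayer:newSublayer];
[CATransaction commit];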
I would check to make sure that you're not setting any properties on your 2500x2500 layers which could require offscreen rendering. (This causes the layer to try and create a full-size buffer off-screen and render its contents into that buffer, rather than just rendering the contents to the screen directly.)
For example, setting an opacity, masksToBounds, mask, shouldRasterize, etc, could cause offscreen-rendering. You can see if offscreen-rendering is happening with the Core Animation instrument. (There's a checkbox to highlight offscreen-rendered areas.)
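As a checklist, this sketch shows the kind of defaults you want on the huge layers (here `layer` stands in for your 2500x2500 content layer; which properties actually trigger offscreen rendering can vary by OS version):

layer.opacity = 1.0;        // fractional group opacity can force an offscreen pass
layer.masksToBounds = NO;   // clipping may require offscreen rendering
layer.mask = nil;           // a mask layer always does
layer.shouldRasterize = NO; // rasterization allocates a full-size offscreen buffer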

NSView leaves artifacts on another NSView when the first is moved across the second

I have an NSView subclass that can be dragged around in its superview. I move the views by calling NSView's setFrameOrigin and setFrameRotation methods in my mouseDragged event handler. The views are both moved and rotated with each call.
I have multiple instances of these views contained by a single superview. The problem I'm having is that, as one view is dragged over another, it leaves artifacts behind on the view it's eclipsing. I recorded a short video of this in action. Unfortunately, due to the video compression the artifacts aren't very visible.
I strongly suspect that this is related to the simultaneous translation and rotation. Quartz Debug reveals that a rectangle of the occluding (or occluded) view is updated as another view is dragged across it (video here); somehow this rectangle is getting miscalculated by the drawing engine, so part of the view that should be redrawn isn't.
The kicker is I have no idea how to fix this. I can't find any way to manually specify the update rect in the docs, nor am I sure that's what needs to happen. Any ideas? Thanks!
You might also consider using CALayers instead of views. Unlike views, layers are intended to be stacked with their siblings.
For a possible least-effort solution, try making the views layer-backed; it may or may not solve this problem, but it's worth a try.
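That least-effort route is a one-liner; a sketch with a hypothetical containerView as the superview:

[containerView setWantsLayer:YES]; // subviews now composite through their own layers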
Views aren't really designed to be stacked in an interactive fashion. Can be done, but edge cases abound.
Generally, for this kind of thing you would use a Cell like infrastructure if you want to do in-view dragging (See the Sketch example) and you would use the drag-n-drop infrastructure if you want to drag between views or windows (or apps).
If you really want to drag a transformed view over the top, you'll need to invalidate a rectangle of the view underneath the view being dragged. The rectangle will need to be bigger by a few pixels than the total area (unrotated/untransformed) that is obscured by the view being dragged. The artifacts are, effectively, caused by rounding error; diagonal lines are just an estimate on a raster drawing system.
See the method:
- (void)setNeedsDisplayInRect:(NSRect)invalidRect;
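A hedged sketch of that invalidation, with draggedView and underView as stand-in names: convert the dragged view's frame into the underlying view's coordinates, pad it by a few pixels, and invalidate it:

NSRect dirty = [underView convertRect:[draggedView frame]
                             fromView:[draggedView superview]];
dirty = NSInsetRect(dirty, -4.0, -4.0); // negative inset grows the rect
[underView setNeedsDisplayInRect:dirty];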
