In Cocoa, specifically the iPhone SDK, the opaque property is described as:
If opaque, the drawing operation assumes that the view fills its bounds and can draw more efficiently. The results are unpredictable if opaque and the view doesn’t fill its bounds. Set this property to NO if the view is fully or partially transparent.
In my experience, if you have a view (label, table cell, etc.) with backgroundColor set to [UIColor clearColor], you do not need to set opaque to NO for it to appear properly (with a clear background).
Intuitively, doing this would require also setting opaque to NO, but I've never run into problems.
Can you mix opaque=YES and clearColor, or am I living on borrowed time? It doesn't seem to be specifically documented anywhere.
"Try it and see" is the only way forward on the iPhone because, as you say, despite the volume of documentation that ships with the SDK, it's not very specific in many cases.
As for opaque, though: it's just a hint to the compositing engine telling it that it doesn't need to bother displaying any layers that are covered by the opaque layer. However, the compositing is done by the phone's graphics chip, so in many cases it is no more efficient to skip drawing the obscured part of a partially obscured layer. That is most likely why you aren't seeing things get messed up at the moment (i.e. Cocoa is ignoring the setting in the cases you've tried), and by the same token why you aren't seeing a performance improvement from setting opaque to YES.
So my advice would be to stick with using the opaque property the way the docs say, because you risk buggy rendering for no real benefit.
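To make the docs' guidance concrete, here is a minimal Objective-C sketch (the label and frame are hypothetical placeholders):
// A view drawn with a transparent background should advertise
// that it is not opaque, per the UIView documentation.
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 44)];
label.backgroundColor = [UIColor clearColor];
label.opaque = NO; // the view is (partially) transparent, so don't promise opacity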
As in the question title, what's the relationship between NSAppearance, NSVisualEffectView.Material, and "vibrancy"? I've found through experimentation that, for some materials, the choice of NSAppearance can change how the material appears (e.g. NSVisualEffectView.Material.titlebar will be light or dark depending on the active NSAppearance), while other materials (e.g. .light) don't seem to care.
I suspect that materials like .titlebar are proxies that select from .dark, .ultraDark, .light, and .mediumLight depending on the NSAppearance, but then that would seem to be the role of .appearanceBased. I also see in the description for NSAppearance.Name.vibrantLight...
This should only be set on an NSVisualEffectView or one of its subviews.
...which somewhat contradicts a statement from the NSVisualEffectView documentation...
The view’s effective appearance must allow vibrancy... in most cases you set the appearance on the window or on the visual effect view—subviews then inherit the appearance.
...suggesting that it could be correct to set vibrantLight as the NSAppearance of an entire window (if that's the look you wanted).
Finally, I'm confused as to what exactly "vibrancy" is; if someone could explain it, that would be great.
An NSAppearance generally describes the styling of controls, colors, and so on for the view hierarchy that the appearance is set against.
NSVisualEffectView provides a way to achieve two effects: translucency and vibrancy. The former is the more obvious one, seen in translucent sidebars or title bars. And the documentation has a really nice description of vibrancy:
Vibrancy is associated with translucency. It describes a compositing mode that does special blending such as Plus Lighter, Plus Darker, Color Dodge, or Color Burn.
Basically, this describes how the content (text, images, etc.) within the visual effect view is composited against the translucency.
So how do these all relate?
Material
Material describes the look of the translucency effect. As you pointed out, some materials are affected by NSAppearance and some are not. The ones that are affected semantically describe their usage, so that custom UI can resemble that effect regardless of appearance (.appearanceBased, .titlebar, .menu, .popover, .sidebar, .selection), whereas the others allow specific control over the resulting translucency (.light, .dark, .mediumLight, .ultraDark) but should be used in conjunction with their associated NSAppearance so that the content within the visual effect view matches the translucency effect. Unless you need specific control over the material, using the appearance-sensitive (semantic) ones results in more standard UI.
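As a rough Objective-C sketch (the view setup and frame are hypothetical, assuming an AppKit target), a semantic material adapts on its own while an explicit one is pinned:
NSVisualEffectView *effectView = [[NSVisualEffectView alloc] initWithFrame:NSMakeRect(0, 0, 320, 480)];
effectView.blendingMode = NSVisualEffectBlendingModeBehindWindow;
effectView.material = NSVisualEffectMaterialSidebar; // semantic: adapts to the effective NSAppearance
// vs. an explicit material, which stays dark regardless of appearance:
effectView.material = NSVisualEffectMaterialDark;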
Vibrancy
So in order to get the content vibrancy effects that NSVisualEffectView can provide, it needs to be used in conjunction with a vibrant appearance: .vibrantLight or .vibrantDark. Without a "vibrant" appearance set, NSVisualEffectView will only provide the translucency effect in the background, and the content within it will look plain, without the special blending modes you see in sidebars or title bars.
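Continuing the earlier hypothetical sketch, pairing an explicit material with its matching vibrant appearance looks something like:
// Pair the dark material with the vibrant dark appearance so that
// subviews inherit it and get the special content blending.
effectView.material = NSVisualEffectMaterialDark;
effectView.appearance = [NSAppearance appearanceNamed:NSAppearanceNameVibrantDark];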
I'm an engineer and we are currently porting our Red5 + Flash game to a Node.js + EaselJS HTML5 application.
Basically: it's a board game, not an RPG. The layer system means we have multiple canvases, split by functionality. For example, there is a static background stage with images, and there is a layer just for the timers.
By default, every canvas is 1920x1080; if needed, we downscale to fit the resolution.
Our first approach used KineticJS, but performance fell off as the game got complex. Then we switched to EaselJS because its abstraction level is lower, so we can decide how to implement some functionality ourselves instead of just using the provided, more heavyweight one.
I was optimistic, but now it's starting to show slowness again, which is why I want to look deeper and do fine-grained performance tuning. (Of course everything is fine in Chrome; Firefox is the problem, but the game must run smoothly on all modern browsers.)
The main layer (stage) is the map, containing ~30 containers, each holding a complex custom shape and ~10 images. The containers listen to mouse events like mouseover, mouseout, and click. Currently, on mouseover for example, I refill the shape with a gradient.
Somehow, when I use caching the way the tutorials show, performance gets even worse, so I assume I'm messing something up.
I collected some advanced questions:
1. In the described situation, when can I use caching, and how? I've already tried cache() on init and updateCache() after filling with another color or gradient, then stage.update(). No impact.
2. If I have a static, never-changing stage, caching doesn't make sense on that layer, right?
3. What does stage.update() do, exactly? Does it trigger a full layer redraw? The docs mention some kind of intelligent "redraw only what changed" behavior.
4. If I want to refill a custom shape with a new color or gradient, I have to completely redraw its graphics, not just use a setFill-style method, right?
5. In EaselJS there is no way to redraw just one container, for example, so how can I manage to update only the one container that changed instead of the whole stage? I thought I could achieve this with caching (cache all containers, then update only the one that changed), but that approach didn't work at all for me.
6. Does it make sense to cache bitmap images? If there are custom shapes and images in a container, which is better: caching the container or just the shape inside it?
7. I found a strange bug, or at least an interesting clue. My canvas layers overlap completely. On the lower layers, mouseover listening works well, but click does not, on the very same container/object. How can I propagate a click event to overlapped layers that have click listeners? I tried it with plain DOM and jQuery, but the event objects were far from what the canvas listeners expected.
In brief, the methods and properties I've already played with, with scant success, while tuning: cache(), updateCache(), update(), mouseEnabled, snapToPixel, clear(), autoClear, enableMouseOver, useRAF, setFPS().
Any answer, suggestion, starting point appreciated.
UPDATE:
This free board game is a strategy game, so you face a world map with ~30 territories. The custom shapes are the territories, and a container holds a territory shape plus the icons that should sit over the territory. Overlap between these containers is minimal.
An example mouse event is a hover effect: the player moves over the territory shape, the shape gets recolored, resized, etc., and a bubble shows up with details about the place.
Basically, at most 1-3 containers change at once (except during the init phase, when all of them do). It's not just the animations and recoloring that are slow in Firefox; the listener delay is high too.
I wrote a change handler, so on tick I only call stage.update() on the modified stages and on the stages where an animation (TweenJS) is running.
In my first approach I put every image that could be needed at least once during the game into its container, so I only toggle visible flags on images (not vectors).
Regarding caching:
There are some strange caching issues; performance can somehow drop with certain sizes of the caching rectangle: CreateJS / EaselJS Strange Performance with certain size shapes
(2) Depending on how often you call stage.update();
(3)
Each time the update method is called, the stage will tick any descendants exposing a tick method (ex. BitmapAnimation) and render its entire display list to the canvas. Any parameters passed to update will be passed on to any onTick handlers.
=> AFAIK it re-renders everything that isn't cached
(4) Yes.
(5) No. (I don't know of any)
(6) If the contents of the container don't change often, I'd cache the whole container; otherwise the container's cache will have to be reconstructed every frame.
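As a rough sketch of that caching pattern in EaselJS (the names and cache bounds here are hypothetical):
// Cache a mostly-static container once, at its known pixel bounds...
var territory = new createjs.Container();
// ... add the territory shape and its ~10 bitmaps here ...
territory.cache(0, 0, 400, 300); // x, y, width, height of the cached region

// ...and refresh the cache only when its contents actually change,
// e.g. after recoloring the shape on mouseover:
territory.updateCache();
stage.update();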
I have a question though: Why do you use multiple canvases? How many do you use? I could imagine that using multiple canvases might slow down the game.
How many sprites do you use in total?
2: If your layer or stage doesn't change, don't call stage.update() for that layer (so it doesn't get re-rendered; this gives me much lower CPU usage!)
For example, keep a global "stagechanged" variable and set this to true when something has changed:
createjs.Ticker.addEventListener("tick", function() {
    if (stagechanged) {
        stagechanged = false;
        stage.update();
    }
});
(or do you already use this, as stated in your "update"?)
4: I found a way to update, for example, the fill color :)
container1.shape1.graphics._fillInstructions[0].params[1] = '#FFFFFF';
(use chrome debugger to look at the _fillInstructions array to see which array position contains your color)
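Note that _fillInstructions is a private array, so this may break between EaselJS versions. A safer (if slightly heavier) alternative, sketched here with hypothetical dimensions, is simply to rebuild the shape's graphics:
// Supported alternative: redraw the shape's graphics with the new fill.
shape1.graphics.clear()
    .beginFill("#FFFFFF")
    .drawRect(0, 0, 100, 100); // re-issue the shape's original path commands here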
5: I found a way to just paint one container :)
// manually draw a single container (!)
var a = stage.canvas.getContext("2d");
a.save();
container1.updateContext(a); // set the container's position (x, y) on the context
container1.draw(a);
a.restore();
Reading https://learn.microsoft.com/en-us/windows/win32/direct2d/comparing-direct2d-and-gdi :
Presentation Model
When Windows was first designed, there was insufficient memory to allow every window to be stored in its own bitmap. As a result, GDI always rendered logically directly to the screen, with various clipping regions applied to ensure that it did not render outside of its window. In contrast, Direct2D follows a model where the application renders to a back-buffer and the result is atomically “flipped” when the application is done drawing. This allows Direct2D to handle animation scenarios much more fluidly than GDI can.
The author says Direct2D uses a back-buffer, and by 'flipped' I guess he means a swap chain. I created a simple demo that draws a rectangle at a random location on each mouse click. But the previous rectangles are not cleared, so it seems that drawing goes directly to the screen and no back-buffer is used.
When you initialize the render target for your Direct2D operations, you can specify the D2D1_PRESENT_OPTIONS option in the second parameter.
I think what confuses you is the D2D1_PRESENT_OPTIONS_RETAIN_CONTENTS and the fact that the buffer isn't swapped but copied.
That doesn't disprove the existence of back-buffers, it only means the back-buffer isn't cleared between redraws. Right observation, wrong conclusion!
If you increase the number of back-buffers in the chain, you'll start noticing flickering rectangles as you keep clicking, so you should always clear your back-buffer between redraws.
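As a hedged C++ sketch (the factory, window handle, and size variables are assumed to exist already), the usual pattern looks like this:
// Create an HWND render target; the present options live in the
// second (D2D1_HWND_RENDER_TARGET_PROPERTIES) parameter.
ID2D1HwndRenderTarget *renderTarget = nullptr;
factory->CreateHwndRenderTarget(
    D2D1::RenderTargetProperties(),
    D2D1::HwndRenderTargetProperties(hwnd, pixelSize, D2D1_PRESENT_OPTIONS_NONE),
    &renderTarget);

// Per frame: clear the back-buffer, draw, then present ("flip") on EndDraw.
renderTarget->BeginDraw();
renderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White)); // clear between redraws
// ... draw this frame's rectangle ...
renderTarget->EndDraw();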
Direct2D does indeed use a back-buffer.
Perhaps you forgot to clear your render target (which is the back-buffer) right after calling BeginDraw, so previous draws stayed there?
I'm trying to implement a technique similar to the one in the ImageBrowserViewAppearance sample code from Apple (located here: http://developer.apple.com/library/mac/#samplecode/ImageBrowserViewAppearance/Introduction/Intro.html ), where CALayers are generated on top of the items in the IKImageBrowserView to customize the appearances of the objects in the image browser.
However, I'm getting a weird problem when I turn on garbage collection, and I can reproduce it in the Apple sample code. Simply turn on garbage collection in the target, then build and launch the ImageBrowserViewAppearance sample app. Then add some photos to the image browser using the "Add Photos..." button.
Now, click on an empty portion of the IKImageBrowserView, then click and drag to start selecting multiple items in the browser view. As you drag the selection box around, you should notice that the pin and gloss overlays for some of the items occasionally flicker and briefly appear in the bottom-left corner of the IKImageBrowserView. All of the CALayers seem to do this occasionally; I've seen the white surrounding slide area flicker down into the bottom-left corner as well.
When I mimic the technique in my own code, I can (not surprisingly) also reproduce this badge flickering. However, the problem disappears when garbage collection is off.
Anybody have a clue what could be going wrong here? I'd like to use garbage collection in my app in conjunction with this technique, but the flickering is kind of annoying.
I bookmarked this a while back but Apple's changed the URL and the text. Fortunately I quoted it when I bookmarked it:
The Core Graphics APIs (Quartz 2D) see an approximately 25% reduction in drawing performance for applications compiled to use garbage collection.
That "25% reduction in drawing performance" text has been rewritten into a "slight overhead in code execution" and that was for 10.5. Perhaps Apple fixed it for 10.6. And you're talking Core Animation, not Core Graphics.
Still, Core Animation eventually has to talk to Core Graphics, and perhaps that performance issue hasn't gone away, and you're being bitten by it.
I fooled around with this a bit and can confirm I get the same behavior running the project with GC turned on. In fact, if you're patient enough and slowly change the selection one image at a time using the arrow keys, eventually it'll trigger the behavior and you can see the layers from one image in the view are displayed in the lower left corner instead of on top of the image. I haven't been able to find any sort of pattern as to when it happens, or any relation between which image is selected and which image has its layers missing. I'm assuming that for whatever reason, those layers are getting their frame origin set to {0, 0}, but heck if I know why.
CATransition is quite unusual. Consider the following code.
CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
// Attach the transition to the layer; subsequent changes are cross-faded.
[self.holdingView.layer addAnimation:trans forKey:nil];
self.loadingView.hidden = YES;
self.displayView.hidden = NO;
Notice that nowhere did I tell the transition that I wanted to display the displayView rather than loadingView, so the views must somehow access the transition themselves. Can anyone explain in more detail how this works?
When you add the transition as an animation, an implicit CATransaction is begun. From that point on, all modifications to layer properties are going to be animated rather than immediately applied. The way CATransition performs this animation is to take a snapshot of the view before the layer properties are changed, and a snapshot of what the view will look like after the layer properties are changed. It then uses a filter (on the Mac this is Core Image; on the iPhone I'm guessing it's just hard-coded math) to interpolate between those two images over time.
This is a key feature of Core Animation. Your drawing logic doesn't generally need to deal with the animation. You're given a graphics context, you draw into it, you're done. The system handles compositing that with other images over time (or rotating it in space, or whatever). So in the case of changing the hidden state, the initial-state fully composited image is blended with the final-state composited image. Very fast on a GPU, and it doesn't really matter what change you made to the view.
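To make the transaction boundaries visible, here is a hedged rewrite of the question's snippet with the transaction made explicit (the behavior should be the same, since a transaction is created implicitly anyway):
[CATransaction begin];
CATransition *trans = [CATransition animation];
trans.duration = 0.5;
trans.type = kCATransitionFade;
[self.holdingView.layer addAnimation:trans forKey:nil];
self.loadingView.hidden = YES;  // the "before" snapshot still shows loadingView
self.displayView.hidden = NO;   // the "after" snapshot shows displayView
[CATransaction commit];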