Any ideas why a screen saver using just a plain ScreenSaverView subclass with a CAEmitterLayer sublayer would render fine on the primary screen and choppy (as if only every 2nd frame renders there) on the secondary screen?
This is my initialization code:
- (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
{
    self = [super initWithFrame:frame isPreview:isPreview];
    if (self)
    {
        CAEmitterLayer *emitterLayer = [MyEmitterFactory emitterLayer:self];
        [self setWantsLayer:YES];
        [self.layer addSublayer:emitterLayer];
        [self setAnimationTimeInterval:1/2.0];
    }
    return self;
}
Everything else in this subclass is default (as provided by the Xcode template).
Funnily enough, backingStoreType sounds like a good candidate to tweak in a ScreenSaverView subclass that uses Core Animation; alas, per the docs, all modes other than the default one are not to be used.
(As the animation is powered by Core Animation, it doesn't really matter what I pass to setAnimationTimeInterval:, or whether I remove the call completely, as experiments have shown.)
According to the documentation for NSView's setWantsLayer: method:
To create a layer-hosting view, you must call setLayer: and supply your layer object before you call the setWantsLayer: method; the order of these method calls is crucial.
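In other words, a layer-hosting setup would look something like this (a minimal sketch, reusing the MyEmitterFactory from the question):

// Layer hosting: supply the layer BEFORE opting in with setWantsLayer:.
CAEmitterLayer *emitterLayer = [MyEmitterFactory emitterLayer:self];
[self setLayer:emitterLayer];
[self setWantsLayer:YES];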
Furthermore: which OS version is it? Does the choppiness also come up when the two displays are mirrored (or disappear when they are)?
The upcoming OS X 10.10 ("Yosemite") offers a new type of view, NSVisualEffectView, which supports through-the-window or within-the-window translucency. I'm mostly interested in through-the-window translucency, so I'm going to focus on that in this question, but it applies to within-the-window translucency as well.
Using through-the-window translucency in 10.10 is trivial. You just place an NSVisualEffectView somewhere in your view hierarchy and set its blendingMode to NSVisualEffectBlendingModeBehindWindow. That's all it takes.
Under 10.10 you can define NSVisualEffectViews in IB, set their blending mode property, and you're off and running.
However, if you want to be backwards-compatible with earlier OS X versions, you can't do that. If you try to include an NSVisualEffectView in your XIB, you'll crash as soon as you try to load the XIB.
I want a "set it and forget it" solution that will offer translucency when run under 10.10 and simply degrade to an opaque view when run on earlier OS versions.
What I've done so far is to make the view in question a normal NSView in the XIB, and then add code (called from awakeFromNib) that checks whether [NSVisualEffectView class] != nil; when the class is defined, I create an instance of NSVisualEffectView, move all of my current view's subviews to the new view, and install it in place. This works, but it's custom code that I have to write every time I want a translucent view.
I'm thinking this might be possible using an NSProxy object. Here's what I'm thinking:
Define a custom subclass of NSView (let's call it MyTranslucentView). In all the init methods (initWithFrame: and initWithCoder:) I would throw away the newly created object and instead create an instance of an NSProxy subclass that has a private instance variable (myActualView). At init time it would decide to create the myActualView object as an NSVisualEffectView if OS >= 10.10, and as a normal NSView under OS < 10.10.
The proxy would forward ALL messages to its myActualView.
This would be a fair amount of fussy, low-level code, but I think it should work.
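For concreteness, here's a sketch of the kind of proxy I have in mind (untested, and all names are placeholders):

#import <Cocoa/Cocoa.h>

// Hypothetical sketch: an NSProxy that stands in for the real view.
@interface MyViewProxy : NSProxy {
    NSView *myActualView;
}
- (instancetype)initWithFrame:(NSRect)frame;
@end

@implementation MyViewProxy

- (instancetype)initWithFrame:(NSRect)frame {
    // NSProxy has no -init; just set up the ivar directly.
    Class viewClass = NSClassFromString(@"NSVisualEffectView");
    if (!viewClass) viewClass = [NSView class]; // pre-10.10 fallback
    myActualView = [[viewClass alloc] initWithFrame:frame];
    return self;
}

// Forward everything else to the real view.
- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
    return [myActualView methodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation {
    [invocation invokeWithTarget:myActualView];
}

@end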
Has anybody done something like this? If so, can you point me in the right direction or give me any pointers?
Apple is MUCH more open with the beta agreement for Yosemite than with previous betas. I don't think I'm violating my beta NDA by talking about this in general terms, but actual code using NSVisualEffectView would probably need to be shared under NDA...
There is a really simple, but somewhat hacky solution: Just dynamically create a class named NSVisualEffectView when your app starts. Then you can load nibs containing the class, with graceful fallback on OS X 10.9 and earlier.
Here's an extract of my app delegate to illustrate the idea:
AppDelegate.m
#import "AppDelegate.h"
#import <objc/runtime.h>
@implementation PGEApplicationDelegate

- (void)applicationWillFinishLaunching:(NSNotification *)notification {
    // On OS X 10.9 and earlier the weakly linked NSVisualEffectView class
    // is NULL, so register a bare NSView subclass under that name.
    if (![NSVisualEffectView class]) {
        Class NSVisualEffectViewClass = objc_allocateClassPair([NSView class], "NSVisualEffectView", 0);
        objc_registerClassPair(NSVisualEffectViewClass);
    }
}

@end
You have to compile this against the OS X 10.10 SDK.
How does it work?
When your app runs on 10.9 and earlier, [NSVisualEffectView class] will be NULL, because the class is weakly linked. In that case, the two lines above create a subclass of NSView, with no methods and no ivars, named NSVisualEffectView.
So when AppKit now unarchives an NSVisualEffectView from a nib file, it will use your newly created class. That subclass behaves identically to an NSView.
But why doesn't everything go up in flames?
When the view is unarchived from the nib file, NSKeyedUnarchiver is used. The nice thing about keyed unarchiving is that it simply ignores additional keys, such as those corresponding to properties / ivars of the real NSVisualEffectView.
Anything else I need to be careful about?
Before you access any properties of NSVisualEffectView in code (e.g. material), make sure the class responds to the selector ([view respondsToSelector:@selector(setMaterial:)]).
[[NSVisualEffectView alloc] initWithFrame:] still won't work, because the class reference is resolved at compile time. Either use [[NSClassFromString(@"NSVisualEffectView") alloc] initWithFrame:], or just allocate an NSView if [NSVisualEffectView class] is NULL.
I just use this category on my top-level view.
If NSVisualEffectView is available, it inserts a vibrancy view at the back and everything just works.
The only thing to watch out for is that you have an extra subview, so if you're changing views around later, you'll have to take that into account.
@implementation NSView (HS)

- (instancetype)insertVibrancyViewBlendingMode:(NSVisualEffectBlendingMode)mode
{
    Class vibrantClass = NSClassFromString(@"NSVisualEffectView");
    if (vibrantClass)
    {
        NSVisualEffectView *vibrant = [[vibrantClass alloc] initWithFrame:self.bounds];
        [vibrant setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
        [vibrant setBlendingMode:mode];
        [self addSubview:vibrant positioned:NSWindowBelow relativeTo:nil];
        return vibrant;
    }
    return nil;
}

@end
I wound up with a variation of @Confused Vorlon's answer, but moving the child views to the visual effect view, like so:
@implementation NSView (Vibrancy)

- (instancetype)insertVibrancyView
{
    Class vibrantClass = NSClassFromString(@"NSVisualEffectView");
    if (vibrantClass) {
        NSVisualEffectView *vibrant = [[vibrantClass alloc] initWithFrame:self.bounds];
        [vibrant setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
        // Reparent the existing subviews into the visual effect view.
        NSArray *mySubviews = [self.subviews copy];
        for (NSView *aView in mySubviews) {
            [aView removeFromSuperview];
            [vibrant addSubview:aView];
        }
        [self addSubview:vibrant];
        return vibrant;
    }
    return nil;
}

@end
I've made an NSWindow in Interface Builder. Inside this window is an NSScrollView and inside that is a custom NSView.
The NSScrollView fills the NSWindow and the custom NSView fills the NSScrollView.
When the custom NSView is sent awakeFromNib, its bounds are 0,0 and 256x373, as I'd expect, filling the scroll view.
However, later I change the size of the NSView to be taller than 373, but it never changes size in the scroll view.
I've tried setting the frame, I've tried setting the bounds, but nothing makes it change.
Except that when I tried changing the intrinsicContentSize of the custom NSView, it did change, but it made the NSWindow and NSScrollView change sizes as well, to fit the new size of 256x1452.
Can anyone tell me where I might be going wrong?
Is it something to do with the constraints set on the scroll view or the NSView? I haven't set any, but when I added the items in Interface Builder, constraints were automatically added for me.
[EDIT]
I've changed it so that the custom NSView is created programmatically and added to the NSScrollView with setDocumentView:, and everything works as I expect. So technically I've solved the problem, but I'd still like an explanation of why it's not working via Interface Builder, if anyone knows.
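For reference, the programmatic setup that works for me looks roughly like this (a sketch; MyCustomView and the scrollView outlet are placeholder names):

// Create the document view in code and hand it to the scroll view.
MyCustomView *customView = [[MyCustomView alloc] initWithFrame:NSMakeRect(0, 0, 256, 373)];
[scrollView setDocumentView:customView];

// Later, growing the document view makes the scroll view scroll as expected.
[customView setFrameSize:NSMakeSize(256, 1452)];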
I have a partial solution, which also causes me to pose an additional question. I had a similar issue: I needed to programmatically change the size of a view embedded in an NSScrollView.
This code works; both methods are needed:
- (void)markViewSizeChanged /* Works correctly */
{
    [self setFrameSize:currentViewSize];
    [self setBoundsSize:currentViewSize];
    [self setNeedsDisplay:YES];
}

- (NSSize)intrinsicContentSize // Override of the NSView method
{
    return currentViewSize;
}
Note: you MUST set currentViewSize in awakeFromNib.
Now for the curious part. If I reverse the order of the two calls setting the frame and bounds, the size of the embedded view is correct, but the scaling factor of objects drawn is off.
- (void)markViewSizeChanged /* DOES NOT work correctly, scaling in drawing off */
{
    [self setBoundsSize:currentViewSize];
    [self setFrameSize:currentViewSize];
    [self setNeedsDisplay:YES];
}
Layer-hosting NSViews (that is, NSViews for which you supply a CALayer instance and set it with setLayer:) can obviously contain subviews. Why obviously? Because in Apple's own Cocoa Slides sample code project, you can check a checkbox that switches the AssetCollectionView from being layer-backed to being layer-hosting:
- (void)setUsesQuartzCompositionBackground:(BOOL)flag {
    if (usesQuartzCompositionBackground != flag) {
        usesQuartzCompositionBackground = flag;

        /* We can display a Quartz Composition in a layer-backed view tree by
           substituting our own QCCompositionLayer in place of the default
           automanaged layer that AppKit would otherwise create for the view.
           Eventually, hosting of QCViews in a layer-backed view subtree may be
           made more automatic, rendering this unnecessary. To minimize visual
           glitches during the transition, temporarily suspend window updates
           during the switch, and toggle layer-backed view rendering temporarily
           off and back on again while we prepare and set the layer.
        */
        [[self window] disableScreenUpdatesUntilFlush];
        [self setWantsLayer:NO];
        if (usesQuartzCompositionBackground) {
            QCCompositionLayer *qcLayer = [QCCompositionLayer compositionLayerWithFile:[[NSBundle mainBundle] pathForResource:@"Cells" ofType:@"qtz"]];
            [self setLayer:qcLayer];
        } else {
            [self setLayer:nil]; // Discard the QCCompositionLayer we were using, and let AppKit automatically create self's backing layer instead.
        }
        [self setWantsLayer:YES];
    }
}
In the same AssetCollectionView class, subviews are added for each image that should be displayed:
- (AssetCollectionViewNode *)insertNodeForAssetAtIndex:(NSUInteger)index {
    Asset *asset = [[[self assetCollection] assets] objectAtIndex:index];
    AssetCollectionViewNode *node = [[AssetCollectionViewNode alloc] init];
    [node setAsset:asset];
    [[self animator] addSubview:[node rootView]];
    [nodes addObject:node];
    return [node autorelease];
}
When I build and run the app and play around with it, everything seems to be fine.
However, in Apple's NSView Class Reference for the setWantsLayer: method it reads:
When using a layer-hosting view you should not rely on the view for
drawing, nor should you add subviews to the layer-hosting view.
Which is true? Is the sample code incorrect, and it's just a coincidence that it works? Or is the documentation wrong (which I doubt)? Or is it OK because the subviews are added through the animator proxy?
When AppKit is "layer hosting" we assume you may (or may not) have a whole subtree of layers that AppKit doesn't know about.
If you add a subview to the layer-hosted view, it might not come out in the sibling order you want. Plus, we sometimes add and remove layers ourselves, so the order might change depending on when you call setLayer: or setWantsLayer:, or when the view is added to or removed from its superview. On Lion (and before) we remove the layers that we "own" (i.e. layer-backed) when the view is removed from the window (or superview).
It is okay to add subviews... their sibling order in the sublayers array just might not be deterministic if you have sibling layers that aren't NSViews.
I don't know what the "right" answer to this is. But I do think that the CocoaSlides example works within the boundaries of what the docs say you "shouldn't" do. In the example, look at where the insertNodeForAssetAtIndex: method is called, and you'll see that it only happens while the view is being populated, before it is ever assigned a layer or has setWantsLayer: called on it.
The docs don't say that a layer-hosting view can't contain any subviews; they just say that you shouldn't add any subviews to one. At the point in time when those subviews are added, the main view hasn't yet become a layer-hosting view. After it has been turned into a layer-hosting view by having a manually created layer assigned to it, no more subviews are added.
So there's really no contradiction between the docs and this particular example. That being said, it could be interesting to explore this further, maybe by switching on the QC background layer right from the start, e.g. by sticking a [self setUsesQuartzCompositionBackground:YES]; right inside initWithFrame:.
SPOILER ALERT:
It seems to work just fine. The creation of the display is a bit slower (not surprising with all that QC animation going on), but apart from that it's smooth sailing.
One comment about this code from Apple: it's busted.
When you first start the app up, note the nice gradient background. Turn QC on, then off.
Poof, no more gradient background.
I'm starting to work on a 3D particle system editor and evolver. I've done something similar in the past with OpenGL, but this time I'm making a Mac OS X Cocoa application. I just have a few questions regarding some code I keep running into on setting up OpenGL.
1) Why do I see a lot of people on the web using...
[self setNeedsDisplay:YES];
Is this the proper way to get OpenGL to render? I now understand it leads to drawRect: being called, but is it the correct way?
2) Is drawRect: the proper method I should be overriding for my render-frame method?
Here's the code that I keep running into on the web:
- (void)prepareOpenGL {
    [[self window] makeFirstResponder:self];
    glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0/60.0 target:self selector:@selector(idle:) userInfo:nil repeats:YES];
    [[NSRunLoop currentRunLoop] addTimer:timer forMode:NSDefaultRunLoopMode];
}

- (void)idle:(NSTimer *)timer {
    if (![[NSApplication sharedApplication] isHidden])
        [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)dirtyRect {
    glClear(GL_COLOR_BUFFER_BIT);
}
You haven't indicated whether you will be drawing your OpenGL content within an NSOpenGLView or a CAOpenGLLayer. These two have slightly different ways of updating their content for display to the screen.
For an NSOpenGLView, you don't need to update the view within its -drawRect: method. In fact, I think you won't want to trigger -setNeedsDisplay: to refresh the NSView, because of the overhead that might incur. In one of my applications, I use a CVDisplayLink to trigger updates at 60 FPS within my own custom rendering methods in an NSOpenGLView. None of these touch -drawRect:. Frames are presented to the screen by calling [[self openGLContext] flushBuffer], not by forcing a redraw of the NSView.
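A minimal sketch of that CVDisplayLink setup (not the actual code from my app; MyOpenGLView, the _displayLink ivar, and -renderFrame are placeholder names):

#import <CoreVideo/CVDisplayLink.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>

// The callback runs on a background thread; render directly instead of
// going through -drawRect:. (Assumes manual reference counting, like the
// code above.)
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                      const CVTimeStamp *now,
                                      const CVTimeStamp *outputTime,
                                      CVOptionFlags flagsIn,
                                      CVOptionFlags *flagsOut,
                                      void *context)
{
    [(MyOpenGLView *)context renderFrame];
    return kCVReturnSuccess;
}

// In the MyOpenGLView implementation (_displayLink is a CVDisplayLinkRef ivar):
- (void)startRenderLoop {
    CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, &MyDisplayLinkCallback, self);
    CVDisplayLinkStart(_displayLink);
}

- (void)renderFrame {
    // Lock and activate the context, since we are not on the main thread.
    CGLLockContext([[self openGLContext] CGLContextObj]);
    [[self openGLContext] makeCurrentContext];
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene ...
    [[self openGLContext] flushBuffer];
    CGLUnlockContext([[self openGLContext] CGLContextObj]);
}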
CAOpenGLLayers are a little different, in that you override -drawInCGLContext:pixelFormat:forLayerTime:displayTime: with your custom rendering code. This method is triggered in response to a manual -setNeedsDisplay, or by the CAOpenGLLayer itself if its asynchronous property is set to YES. It knows when it's ready to present new content from the boolean value you return from -canDrawInCGLContext:pixelFormat:forLayerTime:displayTime:.
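For illustration, a minimal CAOpenGLLayer subclass along those lines might look like this (a sketch; MyGLLayer is a placeholder name):

#import <QuartzCore/QuartzCore.h>
#import <OpenGL/gl.h>

@interface MyGLLayer : CAOpenGLLayer
@end

@implementation MyGLLayer

// With the asynchronous property set to YES, the layer polls this before
// each frame; return YES whenever new content is ready.
- (BOOL)canDrawInCGLContext:(CGLContextObj)ctx
                pixelFormat:(CGLPixelFormatObj)pf
               forLayerTime:(CFTimeInterval)t
                displayTime:(const CVTimeStamp *)ts
{
    return YES;
}

- (void)drawInCGLContext:(CGLContextObj)ctx
             pixelFormat:(CGLPixelFormatObj)pf
            forLayerTime:(CFTimeInterval)t
             displayTime:(const CVTimeStamp *)ts
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene ...

    // The superclass implementation flushes the context to the screen.
    [super drawInCGLContext:ctx pixelFormat:pf forLayerTime:t displayTime:ts];
}

@end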
I've used both of these, and each has its advantages. CAOpenGLLayers make it much easier to overlay other UI elements on your OpenGL rendering, but their rendering methods can be difficult to get to work correctly from a background thread. NSOpenGLViews can be updated easily on a background thread using a CVDisplayLink, but are a bear to overlay content on.
I have a UI where the content of an NSCollectionViewItem's view is drawn programmatically through CALayers. I am using a CAConstraintLayoutManager to keep the layout of the sublayers consistent when resizing, but I am getting very poor performance when doing so. It seems that resizing the window, which causes two CATextLayers to be resized so that they fit the root layer's width and one CATextLayer to be repositioned so that it stays right-aligned, makes the application spend most of its time in the CGSScanConvolveAndIntegrateRGB function (per the Time Profiler instrument).
The most "expensive" layer (the one that causes the most stuttering even if it's the only one displayed) is a wrapped multiline CATextLayer. I have absolutely no idea how to get better performance (I have tried not using a CAConstraintLayoutManager and going with layer alignments but I'm getting the same thing). Has anyone had this problem? Is there a way around it?
PS: I have subclassed the layout manager and disabled all animations during the execution of - (void)layoutSublayersOfLayer:(CALayer *)layer by setting kCATransactionDisableActions to YES in the CATransaction, but it doesn't seem to help.
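The disabling I mean looks roughly like this (a sketch, assuming a CAConstraintLayoutManager subclass):

- (void)layoutSublayersOfLayer:(CALayer *)layer
{
    [CATransaction begin];
    // Suppress the implicit animations that would otherwise accompany
    // every frame change made during layout.
    [CATransaction setValue:(id)kCFBooleanTrue
                     forKey:kCATransactionDisableActions];
    [super layoutSublayersOfLayer:layer];
    [CATransaction commit];
}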
Edit: I have disabled font smoothing for the text layers and performance has increased a little (very little), but an awful amount of time is still spent in _ZL9view_drawP7_CAViewdPK11CVTimeStampb (which is something that gets called by a thread of the ATI Radeon driver, I suppose).
I solved it. Kind of. It still seems like a dirty hack to me, but I couldn't find out how to make setNeedsDisplayInRect: work, so I ended up doing it like this:
In the NSWindow delegate:
- (void)windowWillStartLiveResize:(NSNotification *)notification
{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"beginResize" object:nil];
}

- (void)windowDidEndLiveResize:(NSNotification *)notification
{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"endResize" object:nil];
}
In my custom view those two notifications call, respectively, the -(void)beginResize and -(void)endResize selectors. The first one sets a BOOL inLiveResize variable to YES, while the second one sets it to NO and calls setFrameSize: again with the new frame size.
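The view's side of that wiring might look like this (a sketch; the observer registration and the pendingSize ivar are my assumptions, only the selector names come from the description above):

- (void)awakeFromNib
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(beginResize:)
                                                 name:@"beginResize"
                                               object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(endResize:)
                                                 name:@"endResize"
                                               object:nil];
}

- (void)beginResize:(NSNotification *)notification
{
    inLiveResize = YES;
}

- (void)endResize:(NSNotification *)notification
{
    inLiveResize = NO;
    // pendingSize is a hypothetical ivar recording the last size passed to
    // -setFrameSize: during the live resize.
    [self setFrameSize:pendingSize];
}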
I overrode the -(void)setFrameSize:(NSSize)newSize method like this:
- (void)setFrameSize:(NSSize)newSize
{
    if (inLiveResize) {
        // Only resize live if any part of this view is within the visible
        // rect of the enclosing scroll view (with one view-height of margin).
        NSRect scrollFrame = [[[self superview] enclosingScrollView] documentVisibleRect];
        BOOL condition1 = (self.frame.origin.y > (scrollFrame.origin.y - self.frame.size.height));
        BOOL condition2 = (self.frame.origin.y < (scrollFrame.origin.y + scrollFrame.size.height + self.frame.size.height));
        if (condition1 && condition2)
            [super setFrameSize:newSize];
    }
    else {
        [super setFrameSize:newSize];
    }
}
That's it. This way, only the visible views resize live with the window, while the others get redrawn at the end of the operation. It works, but I don't like how 'dirty' it is; I'm sure there is a more elegant, built-in(ish) way to do this using setNeedsDisplayInRect:. I will research more.