I'm starting work on a 3D particle system editor and evolver. I've done something similar in the past with OpenGL, but this time I'm building a Mac OS X Cocoa application. I just have a few questions about some code I keep running into for setting up OpenGL.
1) Why do I see a lot of people on the web using...
[self setNeedsDisplay:YES];
Is this the proper way to get OpenGL to render? I now understand that it leads to drawRect: being called, but is it the correct approach?
2) Is drawRect: the proper method to override for my render-frame method?
Here's the code that I keep running into on the web:
- (void)prepareOpenGL {
    [[self window] makeFirstResponder:self];
    glClearColor(1.0f, 1.0f, 0.0f, 1.0f);

    // Fire roughly 60 times per second to drive redraws.
    NSTimer *timer = [NSTimer timerWithTimeInterval:1.0/60.0 target:self selector:@selector(idle:) userInfo:nil repeats:YES];
    [[NSRunLoop currentRunLoop] addTimer:timer forMode:NSDefaultRunLoopMode];
}

- (void)idle:(NSTimer *)timer {
    // Only mark the view dirty while the application is visible.
    if (![[NSApplication sharedApplication] isHidden])
        [self setNeedsDisplay:YES];
}

- (void)drawRect:(NSRect)dirtyRect {
    glClear(GL_COLOR_BUFFER_BIT);
}
You haven't indicated whether you will be drawing your OpenGL content within an NSOpenGLView or a CAOpenGLLayer. These two have slightly different ways of updating their content for display to the screen.
For an NSOpenGLView, you don't need to update the view within its -drawRect: method. In fact, you probably won't want to trigger -setNeedsDisplay: to refresh the NSView, because of the overhead that can incur. In one of my applications, I use a CVDisplayLink to trigger updates at 60 FPS within my own custom rendering methods in an NSOpenGLView. None of these touch -drawRect:. Frames are presented to the screen by calling [[self openGLContext] flushBuffer], not by forcing a redraw of the NSView.
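As a rough illustration of that CVDisplayLink approach, here is a minimal sketch, assuming ARC, an NSOpenGLView subclass I'll call MyOpenGLView with a CVDisplayLinkRef ivar named _displayLink, and a custom -renderFrame method (those names are mine, not part of any Apple API):

#import <CoreVideo/CVDisplayLink.h>
#import <OpenGL/OpenGL.h>
#import <OpenGL/gl.h>

// Display-link callback: runs on a background thread once per screen refresh.
static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *displayLinkContext)
{
    [(__bridge MyOpenGLView *)displayLinkContext renderFrame];
    return kCVReturnSuccess;
}

- (void)prepareOpenGL
{
    [super prepareOpenGL];
    CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
    CVDisplayLinkSetOutputCallback(_displayLink, &DisplayLinkCallback, (__bridge void *)self);
    CVDisplayLinkStart(_displayLink);
}

- (void)renderFrame
{
    NSOpenGLContext *context = [self openGLContext];
    [context makeCurrentContext];
    CGLLockContext([context CGLContextObj]);   // guard against a concurrent resize on the main thread

    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the particle system here ...

    [context flushBuffer];                     // this is what actually presents the frame
    CGLUnlockContext([context CGLContextObj]);
}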
CAOpenGLLayers are a little different, in that you override -drawInCGLContext:pixelFormat:forLayerTime:displayTime: with your custom rendering code. This method is triggered in response to a manual -setNeedsDisplay, or by the CAOpenGLLayer itself if its asynchronous property is set to YES. It knows whether you have new content ready to present from the Boolean you return from -canDrawInCGLContext:pixelFormat:forLayerTime:displayTime:.
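For completeness, a bare-bones sketch of the CAOpenGLLayer side (MyParticleLayer is a made-up name):

#import <QuartzCore/QuartzCore.h>
#import <OpenGL/gl.h>

@interface MyParticleLayer : CAOpenGLLayer
@end

@implementation MyParticleLayer

- (BOOL)canDrawInCGLContext:(CGLContextObj)glContext
                pixelFormat:(CGLPixelFormatObj)pixelFormat
               forLayerTime:(CFTimeInterval)timeInterval
                displayTime:(const CVTimeStamp *)timeStamp
{
    return YES; // report that new content is ready whenever the layer asks
}

- (void)drawInCGLContext:(CGLContextObj)glContext
             pixelFormat:(CGLPixelFormatObj)pixelFormat
            forLayerTime:(CFTimeInterval)timeInterval
             displayTime:(const CVTimeStamp *)timeStamp
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the particle system here ...

    // Calling super flushes the context and presents the frame.
    [super drawInCGLContext:glContext pixelFormat:pixelFormat
               forLayerTime:timeInterval displayTime:timeStamp];
}

@end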
I've used both of these, and each has its advantages. CAOpenGLLayers make it much easier to overlay other UI elements on your OpenGL rendering, but their rendering methods can be difficult to get to work correctly from a background thread. NSOpenGLViews can be updated easily on a background thread using a CVDisplayLink, but are a bear to overlay content on.
I'm experiencing the strangest thing. I noticed this issue while working on a project, so I created a sandboxed test to simplify the problem and see if I could figure it out.
I have a document-based application with a single window controller. Inside that are these objects:
Subclassed NSScrollView with isFlipped=YES
Subclassed NSView with isFlipped=YES, this is the documentView in the above scrollview.
An NSImage in the document view
Then I try to fade the image to alpha 0 like this:
- (void)windowDidLoad {
    [super windowDidLoad];
    // Kick off the test after a two-second delay.
    [NSTimer scheduledTimerWithTimeInterval:2 target:self selector:@selector(test) userInfo:nil repeats:NO];
}

- (void)test {
    [NSAnimationContext beginGrouping];
    [[NSAnimationContext currentContext] setDuration:4];
    self.errorOutline.animator.alphaValue = 0;
    [NSAnimationContext endGrouping];
}
This is how it looks in Xcode:
This is what happens when it starts animating:
I've noticed that if I resize the window continually while it's animating, I get a ghosting-image-like effect where I can see the rest of the image.
Another strange thing is that if I set isFlipped=NO, the issue doesn't happen. That's not an option, though: the whole reason I'm flipping the view is to make it easier to manage adding cells to it without calculating positions and heights backwards.
Update: I've filed a radar bug as this is a really strange issue. http://openradar.appspot.com/20680289
Any ideas why a screen saver using just a plain ScreenSaverView subclass with a CAEmitterLayer sublayer would render fine on the primary screen but choppily (as if only every second frame gets rendered there) on the secondary screen?
This is my initialization code:
- (id)initWithFrame:(NSRect)frame isPreview:(BOOL)isPreview
{
    self = [super initWithFrame:frame isPreview:isPreview];
    if (self)
    {
        CAEmitterLayer *emitterLayer = [MyEmitterFactory emitterLayer:self];
        [self setWantsLayer:YES];
        [self.layer addSublayer:emitterLayer];
        [self setAnimationTimeInterval:1/2.0];
    }
    return self;
}
Everything else in this subclass is default (as provided by the Xcode template).
Funnily enough, backingStoreType does sound like a good candidate to tweak in a ScreenSaverView subclass that uses Core Animation; alas, all modes other than the default are not to be used, as per the docs.
(As the animation is powered by Core Animation, it doesn't really matter what I pass to setAnimationTimeInterval, or whether I remove the call completely, as experiments have shown.)
According to the documentation for NSView's setWantsLayer: method:
To create a layer-hosting view, you must call setLayer: and supply your layer object before you call the setWantsLayer: method; the order of these method calls is crucial.
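Applied to the initializer from the question, a layer-hosting setup would look roughly like this (just a sketch; MyEmitterFactory is the question's own class):

self = [super initWithFrame:frame isPreview:isPreview];
if (self)
{
    // Create and assign the root layer *before* turning on wantsLayer,
    // per the documentation quoted above (layer-hosting, not layer-backed).
    CALayer *rootLayer = [CALayer layer];
    [self setLayer:rootLayer];
    [self setWantsLayer:YES];

    CAEmitterLayer *emitterLayer = [MyEmitterFactory emitterLayer:self];
    [rootLayer addSublayer:emitterLayer];
}
return self;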
Furthermore: Which OS version is it? Does the choppiness also come up when the two displays are mirrored (or vice versa)?
I have an NSAnimationContext-based animation (just a scrolling view) that I would like to slow down and/or pause whenever the cursor enters the view. I have already implemented the detection for when this happens; now I just need to figure out how to slow down the animation that is already in progress. I have figured out how to do this with CALayers, but I need to use the animator proxy because several AppKit views are involved in this animation, so Core Animation alone will not work. Does anyone know how to do this? Is there a way to keep track of NSAnimationContexts and change them later on?
Here is a subsection of my code. The first block is called cyclically: every time one animation completes, the next begins.
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    context.duration = pixels/speed;
    [[currentTweetView animator] setFrame:endRect];
} completionHandler:^{
    [currentTweetView removeFromSuperview];
    currentTweetView = nil;
    [self nextAnimationWithAnimationIndex:currentIndex];
}];
Here is the code in the mouseEntered: method. Whenever this is called, neither completionHandler is ever called and the app freezes.
[NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
    [[[self.subviews objectAtIndex:0] animator] setFrame:finalRect];
    context.duration = 100.0;
} completionHandler:^{
    NSLog(@"done");
}];
Also, is there any way to end an NSAnimationContext early and not call the completion handler?
I think if you just set the property via the animator proxy again, under a different NSAnimationContext, it will replace the animation that was in progress. This would be analogous to retargeting the animation (e.g. to a new destination).
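A sketch of what that could look like in mouseEntered:, reusing currentTweetView and endRect from the question (pausing is approximated here with a very long duration):

- (void)mouseEntered:(NSEvent *)event
{
    // Re-set the same destination under a new, much longer-duration context;
    // this should replace the in-flight animation rather than queue behind it.
    [NSAnimationContext beginGrouping];
    [[NSAnimationContext currentContext] setDuration:100.0];
    [[currentTweetView animator] setFrame:endRect];
    [NSAnimationContext endGrouping];
}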
I have a UI where the content of an NSCollectionViewItem's view is drawn programmatically through CALayers. I am using a CAConstraintLayoutManager to keep the layout of the sublayers consistent when resizing, but I am getting very poor performance when doing so. It seems that resizing the window, which causes two CATextLayers to be resized so that they fit the root layer's width and one CATextLayer to be repositioned so that it stays right-aligned, makes the application spend most of its time executing the CGSScanConvolveAndIntegrateRGB function (measured with the Time Profiler instrument).
The most "expensive" layer (the one that causes the most stuttering even if it's the only one displayed) is a wrapped multiline CATextLayer. I have absolutely no idea how to get better performance (I have tried not using a CAConstraintLayoutManager and going with layer alignments but I'm getting the same thing). Has anyone had this problem? Is there a way around it?
P.S.: I have subclassed the layout manager and disabled all the animations during the execution of -(void)layoutSublayersOfLayer:(CALayer *)layer by setting kCATransactionDisableActions to YES in the CATransaction, but it doesn't seem to help.
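For reference, the subclass described in that P.S. would look something like this (the class name is mine):

#import <QuartzCore/QuartzCore.h>

@interface NonAnimatingLayoutManager : CAConstraintLayoutManager
@end

@implementation NonAnimatingLayoutManager

- (void)layoutSublayersOfLayer:(CALayer *)layer
{
    // Wrap the layout pass in a transaction with implicit animations disabled.
    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue
                     forKey:kCATransactionDisableActions];
    [super layoutSublayersOfLayer:layer];
    [CATransaction commit];
}

@end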
Edit: I have disabled font smoothing for the text layers and performance has increased a little (very little), but the app spends an awful lot of time in _ZL9view_drawP7_CAViewdPK11CVTimeStampb (which is something that gets called by a thread of the ATI Radeon driver, I suppose).
I solved it. Kind of. It still seems like a dirty hack to me, but I couldn't find out how to make setNeedsDisplayInRect: work, so I ended up doing it like this:
In the NSWindow delegate:
- (void)windowWillStartLiveResize:(NSNotification *)notification
{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"beginResize" object:nil];
}

- (void)windowDidEndLiveResize:(NSNotification *)notification
{
    [[NSNotificationCenter defaultCenter] postNotificationName:@"endResize" object:nil];
}
In my custom view those two notifications trigger, respectively, the -(void)beginResize and -(void)endResize selectors. The first one sets a BOOL inLiveResize variable to YES, while the second one sets it to NO and calls setFrameSize: again with the new frame size.
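A sketch of that custom-view side (awakeFromNib is just one possible place to register; finalSize is a hypothetical stand-in, since where the new frame size actually comes from isn't shown here):

// In the custom view (assumes a BOOL inLiveResize ivar, as described above).
- (void)awakeFromNib
{
    [super awakeFromNib];
    NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
    [center addObserver:self selector:@selector(beginResize) name:@"beginResize" object:nil];
    [center addObserver:self selector:@selector(endResize) name:@"endResize" object:nil];
}

- (void)beginResize
{
    inLiveResize = YES;
}

- (void)endResize
{
    inLiveResize = NO;
    // Re-apply the final size so views skipped during the live resize catch up;
    // finalSize is hypothetical (see above).
    [self setFrameSize:finalSize];
}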
I overrode the -(void)setFrameSize:(NSSize)newSize method like this:
- (void)setFrameSize:(NSSize)newSize
{
    if (inLiveResize) {
        // During a live resize, only resize views that are currently visible in the scroll view.
        NSRect scrollFrame = [[[self superview] enclosingScrollView] documentVisibleRect];
        BOOL condition1 = (self.frame.origin.y > (scrollFrame.origin.y - self.frame.size.height));
        BOOL condition2 = (self.frame.origin.y < (scrollFrame.origin.y + scrollFrame.size.height + self.frame.size.height));
        if (condition1 && condition2)
            [super setFrameSize:newSize];
    }
    else {
        [super setFrameSize:newSize];
    }
}
That's it. This way, only the visible views resize live with the window, while the others get redrawn at the end of the operation. It works, but I don't like how 'dirty' it is; I'm sure there is a more elegant, built-in(ish) way to do this using the setNeedsDisplayInRect: method. I will research more.
Whenever I try to create a custom window using NSBorderlessWindowMask and set an NSView (for example an NSImageView) as its contentView, I get a 1px gray border around the NSView and I don't seem to be able to get rid of it.
I have followed several approaches including Apple's RoundTransparentWindow sample code as well as several suggestions on StackOverflow.
I suspect the gray border is either coming from the window itself or the NSView.
Have any of you experienced this problem or do you have a possible solution?
The code is fairly straightforward. This is the init method of the custom window:
- (id)initWithContentRect:(NSRect)contentRect styleMask:(NSUInteger)aStyle backing:(NSBackingStoreType)bufferingType defer:(BOOL)flag {
    self = [super initWithContentRect:contentRect styleMask:NSBorderlessWindowMask backing:NSBackingStoreBuffered defer:YES];
    if (self != nil) {
        [self setAlphaValue:1.0];
        [self setBackgroundColor:[NSColor clearColor]];
        [self setOpaque:NO];
    }
    return self;
}
To test this, in IB I placed an NSImageView without a border in that custom window, and yet the image in the NSImageView has a border. The same goes for other NSView subclasses, such as NSTextField and NSTableView.
In addition, I noticed the same thing happening with Apple's RoundTransparentWindow sample application. Is it even possible to draw an NSView in a custom window without a 1px border?
Thanks
Are you sure this happens when you use a regular NSView with no drawing? I bet not. Other controls (like NSImageView) have borders. Maybe you should double-check to make sure they're turned off where possible.
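For instance, for the controls mentioned in the question, switching off the built-in frames would look something like this (imageView, textField, and tableView are hypothetical outlets, and this is only a guess at the source of the 1px line):

[imageView setImageFrameStyle:NSImageFrameNone];            // NSImageView frame off
[textField setBordered:NO];                                 // NSTextField border off
[textField setBezeled:NO];
[textField setDrawsBackground:NO];
[[tableView enclosingScrollView] setBorderType:NSNoBorder]; // NSTableView's enclosing scroll view border off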
Update - How do you get your view into your window? You don't include that code. I created a basic test project (download it here) with an image well and it works just fine. See for yourself.