[CAMetalLayer nextDrawable] issue on OS X 10.11 Beta 2

Whenever I add the CAMetalLayer to the NSView, the [CAMetalLayer nextDrawable] method returns nil after three successful id<CAMetalDrawable> objects.
I tried two different setups. First, I used an MTKView and its CAMetalLayer; that didn't work. Second, I used an NSView and created a new CAMetalLayer; that didn't work either. In both cases I ran into odd behavior.
I want to know whether anyone else is having this problem and whether anyone knows a solution.
Additional notes:
I don't want to use the MTKView draw system by overriding its methods (not yet). Also, this is not a problem on iOS 8, and I haven't tried my code with the beta release of iOS 9 yet.
Update
I rerouted my drawable calls to go through MTKViewDelegate. From the drawInView delegate method, I was able to retrieve consistent drawables. However, I would still like to use the nextDrawable method directly on CAMetalLayer. Hope this helps someone else.
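For reference, the wiring for that approach looks roughly like this (a minimal sketch; the renderer object and the surrounding view setup are illustrative, and drawInView refers to the beta-era MTKViewDelegate callback shown in the answer below):
// Sketch: let MTKView drive rendering through its delegate instead of calling nextDrawable yourself
let metalView = MTKView(frame: view.bounds, device: MTLCreateSystemDefaultDevice())
metalView.delegate = renderer   // 'renderer' is a hypothetical object conforming to MTKViewDelegate
view.addSubview(metalView)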

I had this same issue, and asked Metal devs at WWDC 15.
How MTKView works: MTKView has a limited number of drawables (probably 3), so while you are encoding frames there are only a few drawables you can draw into.
What you are doing: Your scene is probably quite simple, so your CPU can encode frames very quickly. When the CPU gets ahead of the GPU and asks for the next drawable while all (3) drawables are still in use, the request fails.
Solution: You need to use a semaphore to wait for a drawable when no free one is available.
Here's the code to use:
let inflightSemaphore = dispatch_semaphore_create(3)

func drawInView(view: MTKView) {
    // use the semaphore to encode at most 3 frames ahead
    dispatch_semaphore_wait(inflightSemaphore, DISPATCH_TIME_FOREVER)
    self.update()

    let commandBuffer = commandQueue.commandBuffer()
    let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(view.currentRenderPassDescriptor!)
    // ... set the pipeline state, buffers, and issue your drawPrimitives calls here ...
    renderEncoder.endEncoding()

    // signal the semaphore when this frame completes, allowing encoding of the next frame to proceed
    commandBuffer.addCompletedHandler { [weak self] commandBuffer in
        if let strongSelf = self {
            dispatch_semaphore_signal(strongSelf.inflightSemaphore)
        }
    }

    commandBuffer.presentDrawable(view.currentDrawable!)
    commandBuffer.commit()
}
This is not documented anywhere! The only written mention of it is in the iOS project template (File -> New -> Project -> Game -> pick Metal), in GameViewController.
I have already filed a radar on this (no response yet), and I would appreciate it if you did the same: https://bugreport.apple.com
You may also find my GitHub repo useful: https://github.com/haawa799/RamOnMetal

Wrap the rendering of each drawable in an @autoreleasepool block.
https://forums.developer.apple.com/thread/15102
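In Swift, the equivalent is to wrap the per-frame work in an autoreleasepool block so that each drawable is released as soon as its frame has been encoded, rather than accumulating until an outer pool drains. A minimal sketch (encodeFrame is a hypothetical helper standing in for your own command-buffer code):
func drawInView(view: MTKView) {
    autoreleasepool {
        guard let drawable = view.currentDrawable else { return }
        self.encodeFrame(drawable)   // hypothetical: build the command buffer, present the drawable, commit
    }
}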

I forgot to come back to this.
This was fixed in OS X 10.11 Beta 4. The nextDrawable method now works correctly and returns usable CAMetalDrawable objects. I guess I should have waited until the release version came out before posting this; I just wanted to let everyone else know about the problem back when the first beta was released.

Related

Swift 2 SpriteKit issues

I am having some issues with my SpriteKit game since upgrading to iOS 9, even after upgrading to Swift 2. I mentioned one of them here: Atlas images wrong size on iPad iOS 9.
However I am having 2 more issues I cannot fix.
1)
All my particle effects don't work anymore. With the code below they simply no longer show up. If I use an SKEmitterNode on its own it works, but I prefer to add the SKEmitterNode to an SKEffectNode, as it blends much better with backgrounds etc.
This is the code.
let particlesPath = NSBundle.mainBundle().pathForResource("Thrusters", ofType: "sks")!
let particles = NSKeyedUnarchiver.unarchiveObjectWithFile(particlesPath) as! SKEmitterNode
let particleEffects = SKEffectNode() //blends better with background when moving
particleEffects.zPosition = 20
particleEffects.position = CGPointMake(0, -50)
particleEffects.addChild(particles)
addChild(particleEffects)
I read this
http://forum.iphonedev.tv/t/10-8-skeffectnode-or-xcode-7-or-my-issue/669
and it claims it was fixed, but it wasn't.
2)
My Game Center banners, shown when I log in or when an achievement pops, now use the portrait banner even though my game is in landscape, so the banners only cover half of the top of the screen. It looks bad, and since there is no actual code for the banners, I don't even know where to start.
Is anyone else facing these issues? It's frustrating.
Thanks for any help or support.
Some updates to this old question. Believe it or not, regarding the particles, Apple recently replied to my one-year-old bug report to ask whether it is fixed in iOS 10. LOL
I have heard that rendering particles via an SKEffectNode is not ideal performance-wise, and I am not using that approach anymore. Therefore I am not sure whether the bug still occurs with the later Xcode and iOS 9 updates, or in the iOS 10 beta.
Regarding the AdMob banner, I simply had to change
let adMobBannerAdView = GADBannerView()
to
var adMobBannerAdView: GADBannerView?
and delay initialisation until viewDidLoad/didMoveToView, as sketched below.
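For example (a sketch only; the scene class name is illustrative and the banner configuration is omitted):
class GameScene: SKScene {
    var adMobBannerAdView: GADBannerView?

    override func didMoveToView(view: SKView) {
        super.didMoveToView(view)
        // create the banner only once the view hierarchy exists
        if adMobBannerAdView == nil {
            adMobBannerAdView = GADBannerView()
            // configure adUnitID, rootViewController, and load a request here
        }
    }
}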

AVPlayer seekToTime spinning pizza

I have a simple AVPlayer-based OS X app that plays local media. It has skip forward and backward features based on -seekToTime:. With some media, there is an annoying 3-7 second delay before playback continues (especially when skipping forward). I have tried -seekToTime:toleranceBefore:toleranceAfter: with various tolerances. No luck.
Posting a previously solved issue for the record... I noticed that the seekToTime: skipping worked fine while playback was paused. I immediately (i.e., several weeks later) realized that it might make sense to stop playback before seeking, then restart it. So far the problem is 100% solved, and it is blazingly fast. It might be of some use to people trying to do smooth looping (though I don't know how to trigger a completion handler signaling the end of the loop). I don't know whether it works on iOS. Sample code is attached:
- (void)movePlayheadToASpecificTime
{
    // assumes this is a method on an AVPlayerView subclass, properly wired with IB
    // self.player is a property of AVPlayerView, which points to an AVPlayer

    // save the current rate
    float currentPlayingRate = self.player.rate;

    // may need fancier tweaking if the time scale changes within the asset
    int32_t timeScale = self.player.currentItem.asset.duration.timescale;

    // stop playback
    self.player.rate = 0;

    Float64 desiredTimeInSeconds = 4.5; // or whatever

    // convert the desired time to CMTime
    CMTime target = CMTimeMakeWithSeconds(desiredTimeInSeconds, timeScale);

    // perform the move
    [self.player seekToTime:target];

    // restore the playing rate
    self.player.rate = currentPlayingRate;
}

Buffer issue with an OpenGL ES 3D view in a Cocos2D app

I'm trying to insert an OpenGL ES 3D view into a Cocos2D app on the iPad. I'm relatively new to these frameworks, so I basically added these lines to my CCLayer:
CGRect rScreen;
// some code to define the bounds and origin of my frame
EAGL3DView *view3d = [[EAGL3DView alloc] initWithFrame:rScreen];
[[[CCDirector sharedDirector] openGLView] addSubview:view3d];
[view3d startAnimation];
The code I'm using for the 3D part is based on sample code from Apple Developer: http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html
The only changes I made were to create my view programmatically (no xib file, initWithCoder -> initWithFrame...), and I also renamed the EAGLView class & files to EAGL3DView so as not to interfere with the EAGLView that comes along with Cocos2D.
Now on to my problem: when I run this, I get an "OpenGL error 0x0502 in -[EAGLView swapBuffers]"; the 3D view is displayed properly, but otherwise the screen is completely pink.
I went into the swapBuffers method in Cocos2D's EAGLView, and it turns out the only block of code that matters is this one:
if(![context_ presentRenderbuffer:GL_RENDERBUFFER_OES])
    CCLOG(@"cocos2d: Failed to swap renderbuffer in %s\n", __FUNCTION__);
which, by the way, does not enter the "if" branch (presentRenderbuffer: does not return NO, but something is still wrong, since the CHECK_GL_ERROR() afterwards reports a 0x0502 error).
So I gather that my 3D view is somehow clobbering the OpenGL ES renderbuffer (since Cocos2D also uses OpenGL ES), which causes the Cocos2D view not to work properly. That's as far as I've gotten, and I can't figure out precisely what needs to be done to fix it. So what do you think?
Hoping this is only a newbie problem…
Pixelvore
I think the correct approach for what you are trying to do is:
create your own custom CCSprite/CCNode class;
put all the GL code that you are using from the Apple sample into that class (i.e., overriding the draw or visit method of the class).
If you want to try to make the two GL views work nicely together, you could try reading this post, which explains how to associate different buffers with your views.
As to the first approach, have a look at this post and this one.
Admittedly, the first approach might be more complex (depending on how the Apple sample does its OpenGL work), but it will use less memory and be better optimized than the second.

Problem: Rendering stops with OpenGL on Mac OS X 10.6

I've been having problems and, after spending a week trying out all kinds of solutions and tearing my hair out, I've come here to see whether anybody could help me.
I'm working on a 3D browser plugin for the Mac (I have one that works on Windows). The only fully hardware-accelerated way to do this is to use a CAOpenGLLayer (or something that inherits from it). If an NSWindow is created and you attach the layer to that window's NSView, then everything works correctly. But, for some reason, I can only get a specific number of frames (16) to render when passing the layer into the browser.
Cocoa calls my layer's drawInCGLContext for the first 16 frames. Then, for some unknown reason, it stops calling it. Sixteen seems like a very specific, programmatic number of frames, so I wondered whether anybody had any insight into why drawInCGLContext would stop being called after 16 frames.
I'm reasonably sure it's not because I pass the layer into the browser: I've created a very minimal example plugin that renders a rotating quad using CAOpenGLLayer, and that actually works. But the full plugin is a lot more complicated than that, and I just don't know where to look anymore or why drawInCGLContext stops being called. I've tried forcing it using CATransaction, and the layer definitely gets sent the setNeedsDisplay message, but drawInCGLContext is never called. OpenGL doesn't report any errors either (I'm currently checking the results of all OpenGL calls). I'm confused! Help?
So, for anybody else who has this problem in the future: you're trying to draw with the OpenGL context outside of drawInCGLContext. There was a bug in my code where nearly all the drawing happened in the correct place (drawInCGLContext), but one code path led to rendering outside of it.
No errors are raised, nor does glGetError report any problems; rendering just stops. So if this happens to you, you're almost certainly making the same mistake I made!

Should I use NSOperation or NSRunLoop?

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button actions, I have been unable to loop the video-frame monitor while still watching for a button press (much like using a keypressed check in C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation; but how do I do this in a way that allows me to connect it to a button press wired up in Interface Builder?
The documentation is very obtuse about creating such objects. If I create an NSOperation as in the examples I've found, there seems to be no way to communicate with it from an object in Interface Builder. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually works with the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same thing as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error: %d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than locking around the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next frame retrieval, the CVDisplayLink callback changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag (see the sketch below).
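The pattern described above looks roughly like this (a Swift sketch of the idea only; the original code is Objective-C, and setCameraGain/setCameraExposure are hypothetical wrappers standing in for the actual libdc1394 calls):
// Bits recording which settings have changed since the last frame
let gainChanged: UInt32 = 1 << 0
let exposureChanged: UInt32 = 1 << 1

var pendingChanges: UInt32 = 0
var desiredGain: Float = 0.5
var desiredExposure: Float = 0.5

// Called from the UI when a slider moves: record the value and mark it dirty
func updateGain(gain: Float) {
    desiredGain = gain
    pendingChanges |= gainChanged
}

// Called from the display-link callback before grabbing the next frame
func applyPendingCameraSettings() {
    if pendingChanges & gainChanged != 0 {
        setCameraGain(desiredGain)           // hypothetical wrapper around the libdc1394 call
    }
    if pendingChanges & exposureChanged != 0 {
        setCameraExposure(desiredExposure)   // hypothetical wrapper
    }
    pendingChanges = 0
}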
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own; QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect the camera properties as you wanted, but if not, plan B should work.
Plan B: use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above), and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
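The timer half of plan B might look like this (a sketch only; it assumes an NSObject-derived controller so the selector is visible to the Objective-C runtime, and grabCurrentFrameImage is a hypothetical helper standing in for whatever QTKit frame-grabbing approach the linked answer describes):
// Poll a few times per second, process the frame, and show it in the NSImageView
let frameTimer = NSTimer.scheduledTimerWithTimeInterval(0.25, target: self,
    selector: "grabAndDisplayFrame", userInfo: nil, repeats: true)

func grabAndDisplayFrame() {
    guard let frame = grabCurrentFrameImage() else { return }   // hypothetical QTKit frame grab
    // apply Core Image (or other) processing here before display
    imageView.image = frame
}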
