I am in the process of converting some old OpenGL code to Metal.
At the moment I am using a MTKView to render a memory buffer to a window. I'm using it with paused = YES, enableSetNeedsDisplay = NO, and manual calls to draw() from my rendering loop.
Everything appears to be working, except for the fact that I am limited to 60 frames per second for no obvious reason. I suspect that Metal is synchronising to the monitor refresh when I don't want it to.
When I resize the window my frame rate temporarily jumps to 150+ frames per second, which tells me that the limit is not of my making.
Does anyone know how to stop this frame rate limit? I have tried setting preferredFramesPerSecond to different values (both lower and higher) but this seems to have no effect.
Thanks in advance for any pointers.
As is typical, I figured it out a few minutes after asking the question:
// An MTKView is backed by a CAMetalLayer; cast the layer and turn off v-sync.
CAMetalLayer *c = (CAMetalLayer *)self.layer;
c.displaySyncEnabled = NO;
In Windows World, a dedicated render thread would loop something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();
        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen renderbuffer (i.e., a MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
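To make that concrete, here is a minimal sketch of the blit-and-present step; offscreenTexture, commandQueue, and layer are names I'm assuming you already have set up, not names from the original answer:
// Sketch: copy the most recently rendered offscreen texture into a drawable and present it.
id<CAMetalDrawable> drawable = [layer nextDrawable];   // may block until a drawable is free
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:offscreenTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(offscreenTexture.width, offscreenTexture.height, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];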
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
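As a rough illustration of that idea (the names and the in-flight count of 3 are my assumptions, not part of the answer): wait on a counting semaphore before encoding each frame and signal it from the command buffer's completion handler.
// Sketch: cap the number of frames in flight at 3.
dispatch_semaphore_t inFlight = dispatch_semaphore_create(3);

// In the per-frame callback driven by the display link:
dispatch_semaphore_wait(inFlight, DISPATCH_TIME_FOREVER);
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    dispatch_semaphore_signal(inFlight);   // GPU finished this frame; a new one may start
}];
// ... encode, present, and commit as usual ...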
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
desc.colorAttachments[0].loadAction = MTLLoadActionClear;
desc.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);   // choose your own clear color
desc.colorAttachments[0].storeAction = MTLStoreActionStore;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
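Continuing from the descriptor above, a minimal sketch of those steps (commandQueue is assumed to be an MTLCommandQueue you created once from your device):
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... set pipeline state, vertex buffers, and issue draw calls here ...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];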
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
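For completeness, a one-line sketch of that setting on the CAMetalLayer you're drawing into:
// Double-buffer instead of the default triple-buffering (macOS 10.13.2+).
layer.maximumDrawableCount = 2;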
I am developing an A-Frame project on my MacBook Pro (late 2013). When running the project, the fan of my computer always spins fast, regardless of which browser I use (Firefox, Safari, Chrome) and of the project size (it also happens with a project containing just a simple a-box).
aframe-stats shows me that my project (1028244 vertices, 342748 faces) still runs at 20 fps.
Is it somehow possible to limit the frame rate to 10 fps in order to keep my computer quiet? Or is there any other way to limit the computational load of the A-Frame project? I already tried a native approach with sudo cputhrottle plugin-container 10, but that throttled the whole Firefox browser, not just the A-Frame renderer. Can I pull the brake somewhere in the JavaScript or the browser settings?
It's difficult to say without your project code. Large data sets will simply max out even a high-spec MacBook Pro. I have found it helpful to pause any rendering whenever possible to quiet the users' machines.
I personally removed the automatic requestAnimationFrame-driven rendering in favor of waiting for controls and objects to change.
For example:
this.controls.addEventListener( 'change', function(e){ addToRenderStack(); });
A simple addToRenderStack function puts a new value in a list to request a render, with the expectation that the render will occur at some point in the future rather than right away. The list can also be used to log who requested the render in the call stack, which helps narrow down performance hogs.
addToRenderStack places a render request in a list. In the requestAnimationFrame loop, if the list has any length, a render is called on the scene. The stack is immediately cleared rather than processed one by one. If controls or animations continue to make render requests, the list will have a length again and the requestAnimationFrame loop will process them in the same way with another render.
In this way, the code only renders when absolutely required. This saved me a lot of grinding on the frame rate, and the fans only come on during intensive operations and then shut down when they're complete, much like a typical 3D game experience.
Your mileage may vary depending on what's happening in your app. I work in engineering, so the view of the 3D world is often stopped while an engineer examines or shows a model.
I am making a drawing on an NSView using a timer that is set to update every 0.02 seconds. On each update some physical simulation makes a step, and then Canvas!.needsDisplay = true. It works when the app is in the foreground (usually), but when lag happens, the simulation progresses forward even though the view hasn't reflected it yet. How do I pause the timer during these times so the simulation only advances when the NSView can show it? I do not want to call step_over from inside drawRect; that seems like a bad idea because it would make the simulation harder to stop.
Generally this kind of update should be done the other way around, by letting the display ask you for frames as it can display them. On the Mac this is done with a CVDisplayLink (CADisplayLink is the iOS counterpart). Configure it with a method you want to be called when a frame can be drawn.
Generally you do want your simulation to keep moving forward, even if it means dropping frames occasionally. For that, you check the timestamp and use that to work out what time to use for your new frame. But if you only want to move forward when the display can show it, then just update once per call.
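A rough sketch of that timestamp approach in a CVDisplayLink callback; SimulationView and stepSimulationToTime: are hypothetical names standing in for your own view class and stepping code:
// Sketch: advance the simulation to the frame's output time, then redraw.
static CVReturn displayLinkCallback(CVDisplayLinkRef link,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    SimulationView *view = (SimulationView *)context;                 // hypothetical view class
    double t = (double)inOutputTime->videoTime / (double)inOutputTime->videoTimeScale;
    [view stepSimulationToTime:t];                                    // hypothetical stepping method
    dispatch_async(dispatch_get_main_queue(), ^{
        view.needsDisplay = YES;                                      // mark for redraw on the main thread
    });
    return kCVReturnSuccess;
}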
Note that generating at 50fps is often going to mismatch the system that's trying to draw at 60fps, so you're going to wind up missing frames occasionally. That's one of several reasons not to try to push drawing with a timer.
See also Alternative of CADisplayLink for Mac OS X. Note that trying to draw at 50fps with Core Graphics usually isn't going to give good results in any case. The right tool here in OS X is Core Animation (or SpriteKit for games on 10.10, or OpenGL for more advanced high-speed rendering). You can do very basic animations with an NSTimer (and we did for years before Core Animation came along), but it's not really a tool for complex drawing.
I am trying to create an application with an OpenGL view using MonoMac. Setting up an application and an NSOpenGLView was fairly simple...
...but for some reason I cannot get a consistent frame rate rendering OpenGL. The issue I am having is that 9 out of ten frames have perfect performance and every tenth frame or so I am getting a massive frame time spike (about 60ms-80ms for a single frame). The time of the slow frame correlates with the size of the control (and even more so using retina backing buffer).
I have been digging and have come up with nothing that works for my case.
I tried NSOpenGLView both with a CVDisplayLink and with rendering on the main thread using timers and DrawRect.
I also tried MonoMacGameView, again in both variants. MonoMacGameView actually has consistent performance, but it only draws when my window does not have a background color.
I reimplemented the run loop to do my own NextEvent polling just to find out that that is not the issue...
So, my current hunch is that it has something to do with layer-backed rendering in Cocoa views, but I really cannot figure out what is causing this.
Any hint as to what is causing this delay?
I found a solution which produces pretty good results:
Do not use NSOpenGLView or MonoMacGameView.
Use the approach described in the example on this page: http://cocoa-mono.org/archives/366/monomac-and-opengl/
To enable Retina support, export the ContentScale property on the derived class and set it depending on whether you are running on a Retina screen or not.
In conclusion, using a Core Animation layer is the only viable solution.
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still looking for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it via an object from Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon Quicktime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
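A stripped-down sketch of that pattern (the identifiers are mine, and the actual camera call is left as a placeholder comment):
// Bits in a flag word indicating which camera settings need to be pushed to the hardware.
enum {
    kPendingGainChange  = 1 << 0,
    kPendingGammaChange = 1 << 1,
};

// Called from the UI thread: remember the new value and mark it as pending.
- (void)setGain:(float)gain
{
    _gain = gain;
    _pendingChanges |= kPendingGainChange;   // real code should set this bit atomically
}

// Called from the display-link callback before grabbing the next frame.
- (void)applyPendingCameraSettings
{
    if (_pendingChanges & kPendingGainChange) {
        // e.g. a libdc1394 call such as dc1394_feature_set_value(...) using _gain
        _pendingChanges &= ~kPendingGainChange;
    }
}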
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
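The extensions being referred to are most likely Apple's client storage and texture range hints; a rough sketch, where cameraTexture, frameBuffer, and bufferLength are assumed to be your texture name, frame memory, and its size:
// Hint that the texture data lives in client memory and can be DMA'd to the GPU.
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, cameraTexture);
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_ARB, bufferLength, frameBuffer);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);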
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can adjust camera properties as you wanted, but if not, plan B should work.
Plan B: use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
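A rough sketch of plan B; grabCurrentFrame is a hypothetical wrapper around whatever QTKit frame-grab call fits your capture setup, and imageView is assumed to be your NSImageView outlet:
// Somewhere in your setup code (e.g. awakeFromNib): poll a few times per second.
frameTimer = [NSTimer scheduledTimerWithTimeInterval:0.2
                                              target:self
                                            selector:@selector(updatePreview:)
                                            userInfo:nil
                                             repeats:YES];

- (void)updatePreview:(NSTimer *)timer
{
    NSImage *frame = [self grabCurrentFrame];   // hypothetical QTKit frame-grab wrapper
    if (frame) {
        [imageView setImage:frame];
    }
}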