This is a quote from the AnimationTimer class documentation.
The class AnimationTimer allows to create a timer, that is called in each frame while it is active. An extending class has to override the method handle(long) which will be called in every frame. The methods start() and stop() allow to start and stop the timer.
But I don't know how many frames are rendered each second, or when the handle method is called: before the frame is rendered, or after?
And is it a bad idea to use many AnimationTimers in my application (a game)?
How many frames JavaFX renders depends on the complexity of your program. The rate is capped at roughly 60 frames per second, a common limit for applications. The handle method is called before the frame is displayed (you can verify this by putting a breakpoint in the method).
In fact, counting frames per second is a common use of AnimationTimer. This blog entry explains a lot:
http://tbeernot.wordpress.com/2011/11/12/javafx-2-0-bubblemark/
The AnimationTimer can be used for a wide range of tasks, not just animations. Whether it is a good or bad idea for your specific application cannot be determined without seeing the code itself, but for typical uses of AnimationTimer, this is a good source to read:
http://blog.netopyr.com/2012/06/14/using-the-javafx-animationtimer/
In the Windows world, a dedicated render thread would loop over something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();
        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen renderbuffer (i.e., a MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
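As a rough sketch of that blit-and-present step (layer, commandQueue, and offscreenTexture are assumed to already exist in your code, and the off-screen texture must match the drawable's size and pixel format for the copy to be valid):

id<CAMetalDrawable> drawable = [layer nextDrawable];   // may block until a drawable is free
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:offscreenTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(offscreenTexture.width, offscreenTexture.height, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];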
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
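A minimal sketch of that semaphore pattern, assuming a commandQueue you created earlier (kMaxFramesInFlight is an arbitrary name and budget, not an API constant):

// Created once, alongside your command queue:
static const long kMaxFramesInFlight = 3;
dispatch_semaphore_t frameSemaphore = dispatch_semaphore_create(kMaxFramesInFlight);

// In your render loop or display link callback:
dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);   // blocks if the CPU gets too far ahead
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
// ... encode this frame's work here ...
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
    dispatch_semaphore_signal(frameSemaphore);   // the GPU finished a frame; free one slot
}];
[commandBuffer commit];

The completed handler fires when the GPU finishes the frame, so the CPU can never run more than kMaxFramesInFlight frames ahead.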
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
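Continuing that snippet, a sketch of the rest of the sequence might look like this (commandQueue is assumed to be an MTLCommandQueue you created earlier; the load action and clear color lines are typical additions, not part of the original description):

desc.colorAttachments[0].loadAction = MTLLoadActionClear;
desc.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ... set pipeline state, vertex buffers, and issue draw calls ...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];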
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
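For reference, a layer configuration reflecting those two points might look like the following sketch (maximumDrawableCount and displaySyncEnabled are the CAMetalLayer properties discussed above, available on recent macOS versions; the device and pixel format lines are just illustrative defaults):

CAMetalLayer *layer = [CAMetalLayer layer];
layer.device = MTLCreateSystemDefaultDevice();
layer.pixelFormat = MTLPixelFormatBGRA8Unorm;
layer.maximumDrawableCount = 2;     // double-buffering instead of the default 3
layer.displaySyncEnabled = YES;     // macOS only; present in step with the display (the default)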
I am making a drawing on an NSView using a timer that is set to update every 0.02 seconds. On each update a physical simulation takes a step, and then Canvas!.needsDisplay = true. It works when the app is in the foreground (usually), but when lag occurs, the simulation keeps progressing even though the view hasn't reflected it yet. How do I pause the timer during these times so the simulation only advances when the NSView can show it? I do not want to call step_over from inside drawRect, because that seems like a bad idea: it would make the simulation harder to stop.
Generally this kind of update should be done the other way around: let the display ask you for frames as it can display them. On the Mac this is done with a CVDisplayLink (CADisplayLink is the iOS counterpart). Configure it with a method you want called whenever a frame can be drawn.
Generally you do want your simulation to keep moving forward, even if it means dropping frames occasionally. For that, you check the timestamp and use that to work out what time to use for your new frame. But if you only want to move forward when the display can show it, then just update once per call.
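A rough Objective-C sketch of such a callback (SimulationView and stepSimulationToTime: are placeholder names for your own class and method; the __bridge cast assumes ARC):

static CVReturn displayLinkCallback(CVDisplayLinkRef link,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    SimulationView *view = (__bridge SimulationView *)context;
    // Use the timestamp of the frame being prepared to decide how far to advance.
    double targetTime = (double)inOutputTime->videoTime / (double)inOutputTime->videoTimeScale;
    [view stepSimulationToTime:targetTime];
    dispatch_async(dispatch_get_main_queue(), ^{
        view.needsDisplay = YES;   // trigger drawing on the main thread
    });
    return kCVReturnSuccess;
}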
Note that generating at 50fps is often going to mismatch the system that's trying to draw at 60fps, so you're going to wind up missing frames occasionally. That's one of several reasons not to try to push drawing with a timer.
See also Alternative of CADisplayLink for Mac OS X. Note that trying to draw at 50fps with Core Graphics usually isn't going to give good results in any case. The right tool here in OS X is Core Animation (or SpriteKit for games on 10.10, or OpenGL for more advanced high-speed rendering). You can do very basic animations with an NSTimer (and we did for years before Core Animation came along), but it's not really a tool for complex drawing.
I'm trying to benchmark the loading of large images in Corona SDK.
local startTime = system.getTimer()
local myImage = display.newImageRect( "someImage.jpg", 1024, 768 )
local endTime = system.getTimer()
print( endTime - startTime ) -- prints 8.4319999999998
This returns values of around 8 ms. I know it takes longer to load and display an image, because if it really took 8 ms I wouldn't notice the delay, but I do. I'd say it takes about 300 ms.
Also, the FPS drops drastically when loading a large image. I'm monitoring this with an enterFrame event, and while the image is loading it prints values of around 0.3 for one frame.
local function onEnterFrame( event )
    print( display.fps )
end
Runtime:addEventListener( "enterFrame", onEnterFrame )
The frame takes a long time to render when loading, even though the reported image load takes less than 1/60 of a second. I guess that means the rendering is happening asynchronously somewhere else.
So, how can I measure the time it takes to really load and display an image?
Since Corona SDK is closed source, we'll have to use the docs and imagination.
I see three possibilities here:
1. Corona is doing what it says, and your subjective experience is wrong.
2. Corona is loading the images in a background thread, so the call to display.newImageRect is non-blocking: it "starts" loading the image, and then continues. When this happens in other SDKs (mostly JavaScript-based ones) you get a "ready" callback that you can use on the image object, but I could not find such a thing in the docs.
3. Corona loads the image quickly, but requires "extra work" afterwards. For example, it generates lots of garbage which has to be garbage-collected. So the image gets loaded fast, but then this extra work slows down the app.
My bet is on 3, but it doesn't really matter: whichever of these options is causing the slowdown, the fix is the same. Instead of loading the images right before you draw them, preload them.
I don't use Corona SDK, but a quick Google search pointed me to the storyboard module, in particular to storyboard.loadScene.
Create a new scene, list all the images that you need in it, and load it before showing it; that way the image loading is done in advance and won't slow down your app.
Most likely the image is rendered during the scene's rendering loop, and there is no event to indicate that an image has been rendered. However, if you create the display object in the scene's create event handler or in a button click handler, and also register an enterFrame handler, you can measure the time between that and the first enterFrame event. I can't try this here, but my guess is this will give you an estimate of the time to render the image. Don't use FPS. A larger image will probably give you a larger measurement. If you measure the time between enterFrame events, you will probably find it is much smaller than the time between the create/click event and the first frame event, or between the first two frame events after the create/click event. Post a comment if you would like to see some example code.
I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autoreleasepool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it using an object from Interface Builder. When I create an NSRunLoop, I get an object leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error: %d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
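An illustrative sketch of that flag-based approach (the SPCameraDirtyFlags type, the _pendingGain and _dirtyFlags ivars, and the libdc1394 call site are all assumed names, not part of any framework):

// Inside your camera controller class: settings the UI can mark as "dirty".
typedef NS_OPTIONS(uint32_t, SPCameraDirtyFlags) {
    SPCameraDirtyGain     = 1 << 0,
    SPCameraDirtyExposure = 1 << 1,
};

// Called from the UI (main thread): remember the value and set the bit.
- (void)setGain:(float)gain
{
    _pendingGain = gain;
    _dirtyFlags |= SPCameraDirtyGain;
}

// Called from the display link callback before grabbing the next frame.
- (void)applyPendingCameraSettings
{
    if (_dirtyFlags & SPCameraDirtyGain) {
        // push _pendingGain to the camera here via libdc1394
        _dirtyFlags &= ~SPCameraDirtyGain;
    }
}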
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted but if not, plan B should work.
Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
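A sketch of that plan (frameTimer, imageView, and grabCurrentFrame are assumed properties and helpers standing in for your own QTKit grabbing code):

// Poll a few times per second; this doesn't need to run at full frame rate.
self.frameTimer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 10.0
                                                   target:self
                                                 selector:@selector(refreshPreview:)
                                                 userInfo:nil
                                                  repeats:YES];

- (void)refreshPreview:(NSTimer *)timer
{
    NSImage *frame = [self grabCurrentFrame];   // assumed helper wrapping the QTKit frame grab
    // Optionally run the frame through Core Image filters here before displaying it.
    self.imageView.image = frame;
}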
I've made an animation with SVG. It's like a slowly changing wallpaper. The idea is that you should barely notice it is changing.
It's purely decorative, and I don't want it to drain any resources. Is there a way to set the frame rate in SVG? I thought setting it to a very low number might do the trick? I'm using Raphael, by the way.
Deep in Raphael's guts, you will find the logic that controls the frame rate for non-keyframe animations:
animationElements[length] && setTimeout(animation);
By omitting an actual timeout value, Raphael is basically telling the browser to run the method as fast as it can (within scheduling constraints provided by the DOM specification and the browser implementation). You could either tweak that function to use a user-supplied parameter (or put a number there, though that will affect all animations), or use Peter's suggestion.