In older versions of Mac OS X, one would call
UpdateSystemActivity(UsrActivity);
to reset the screensaver timer.
In modern versions of macOS, the most commonly recommended solution is:
static IOPMAssertionID activity_assertion_id = kIOPMNullAssertionID;
IOReturn r = IOPMAssertionDeclareUserActivity(CFSTR("FractalUserActivity"),
                                              kIOPMUserActiveLocal,
                                              &activity_assertion_id);
or
static IOPMAssertionID power_assertion_id = kIOPMNullAssertionID;
IOReturn result = IOPMAssertionCreateWithName(
    kIOPMAssertionTypeNoDisplaySleep,
    kIOPMAssertionLevelOn,
    CFSTR("FractalNewFrameActivity"),
    &power_assertion_id);
IOPMAssertionRelease(power_assertion_id);
However, neither of these, in my testing, appears to actually reset the macOS screensaver timer. When I set my screensaver to 1 minute and run the above code in a loop at 60 FPS, the screensaver still turns on eventually.
Often it's said that IOPMAssertionRelease should only be called once the screen no longer needs to be held awake, but I don't think that's the functionality I need. I simply need to reset the one-minute screensaver timer: the timer should be reset every time a framebuffer gets rendered [due to the application changing what's being displayed], but if no frame gets rendered within a one-minute interval, the screensaver should be displayed.
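For reference, the per-frame pattern I'm describing boils down to something like this (a rough Swift sketch of the same IOPMAssertionDeclareUserActivity call shown above; the function name, the activity label, and the render-loop hookup are just placeholders):
import IOKit.pwr_mgt

// Reused across calls, so the system updates the existing assertion
// instead of creating a brand-new one every frame.
var userActivityAssertionID: IOPMAssertionID = 0

// Intended to be called once per rendered frame, e.g. from the 60 FPS render loop.
func declareUserActivityForRenderedFrame() {
    let result = IOPMAssertionDeclareUserActivity("FractalUserActivity" as CFString,
                                                  kIOPMUserActiveLocal,
                                                  &userActivityAssertionID)
    if result != kIOReturnSuccess {
        print("IOPMAssertionDeclareUserActivity failed: \(result)")
    }
}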
Is there no way to do this in modern versions of macOS? Chromium currently exhibits exactly the behavior I want: when watching a YouTube video, the screensaver turns on one minute after the video ends, regardless of how long the video is, even if it's e.g. a 10-minute video.
~ I don't want to hardcode one minute; obviously this should work regardless of the user's screensaver time setting.
Related
I'm trying to get an animation to run smoothly, at the refresh rate of the display, without screen tearing. The animation is rendered using Metal. As far as I understand, Apple tells you to use CVDisplayLink-based timers for this, and that is what I've done.
Everything works fine on desktop computers, and on laptops when they are connected to a power adaptor. However, when the laptops run on battery, especially when the battery isn't full, I can see very noticeable stuttering in the animation. There is no tearing, though. It seems like the timer is not fired on every screen refresh.
I'm pretty sure this is not because the CPU is throttled down. CPU utilisation is below 10%, and the animation takes less than 2 ms to calculate and render; at 60 Hz it would have about 16 ms to do so.
For what it's worth, this is how I set up the timer:
private func makeDisplayLink(window: NSWindow) -> CVDisplayLink
{
    func displayLinkOutputCallback(_ displayLink: CVDisplayLink,
                                   _ inNow: UnsafePointer<CVTimeStamp>,
                                   _ inOutputTime: UnsafePointer<CVTimeStamp>,
                                   _ flagsIn: CVOptionFlags,
                                   _ flagsOut: UnsafeMutablePointer<CVOptionFlags>,
                                   _ displayLinkContext: UnsafeMutableRawPointer?) -> CVReturn {
        unsafeBitCast(displayLinkContext, to: MetalScreenSaverView.self).animateOneFrame()
        return kCVReturnSuccess
    }

    var link: CVDisplayLink?
    let screensID = UInt32(window.screen!.deviceDescription["NSScreenNumber"] as! Int)
    CVDisplayLinkCreateWithCGDisplay(screensID, &link)
    CVDisplayLinkSetOutputCallback(link!, displayLinkOutputCallback,
                                   UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
    return link!
}
Later I start the link by calling CVDisplayLinkStart with the link as the argument. The full code, if you are interested, can be found at: https://github.com/thoughtworks/dancing-glyphs/blob/master/Library/MetalScreenSaverView.swift
Any ideas? Can I somehow tell OS X to ensure that the timer fires on every screen refresh? Is this an issue with Metal? I've seen games and screen savers that run fine on battery, but I assume they use OpenGL.
If I'm reading your code correctly, you're assuming that CVDisplayLink is called at some unchangeable interval. That's not promised at all. The system is absolutely free to modify the refresh interval or drop frames. All real-time systems must include the ability to drop frames. That's the heart of what it means to be "real-time."
You're passed the "currently displayed" time and the "target output" time. You're supposed to use those to compute the correct frame for the target output. I don't see you making use of the output time in your code.
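For illustration, here is roughly what I mean, adapted from your callback (a sketch only; animateOneFrame(forTime:) stands in for a hypothetical time-based variant of your existing method):
func displayLinkOutputCallback(_ displayLink: CVDisplayLink,
                               _ inNow: UnsafePointer<CVTimeStamp>,
                               _ inOutputTime: UnsafePointer<CVTimeStamp>,
                               _ flagsIn: CVOptionFlags,
                               _ flagsOut: UnsafeMutablePointer<CVOptionFlags>,
                               _ displayLinkContext: UnsafeMutableRawPointer?) -> CVReturn {
    let view = unsafeBitCast(displayLinkContext, to: MetalScreenSaverView.self)
    // Use the *target output* timestamp, not "now", to decide what to draw.
    let output = inOutputTime.pointee
    let targetSeconds = Double(output.videoTime) / Double(output.videoTimeScale)
    view.animateOneFrame(forTime: targetSeconds)   // hypothetical time-based variant
    return kCVReturnSuccess
}
That way, if the system stretches the interval or skips a refresh, the animation lands on the correct state for the frame that will actually be shown, instead of advancing by a fixed step per callback.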
I am creating a Cocoa application in Xcode 6, and the application uses an NSOpenGLView. The draw method in my extension of NSOpenGLView is getting called repeatedly, but I am not sure at what rate it is being called, or whether there is a way to set the rate.
Is there a default "framerate" for NSOpenGLView, and is there a way to change it?
Apple has a technote describing how to drive an OpenGL rendering loop. The answer is to use a Core Video display link (CVDisplayLink), which will invoke a callback during the vertical blanking interval.
Generally in any window system, the window is not redrawn on a periodic schedule; it only happens in response to events that cause a "damaged" or "dirty" state.
The number of things that cause this "damaged" state has gotten a lot smaller in recent years thanks to compositing window managers (OS X uses one such window manager). It used to happen whenever another window moved over top of it, but in modern window managers it only happens during/after a resize event or when the window is moved.
As you would expect, Cocoa's documentation says the same thing:
- update
Called by Cocoa when the view’s window moves or when the view itself moves or is resized.
Owen Taylor writes an excellent blog; this excerpt (which accompanies a diagram there) illustrates what may happen:
If a compositor isn’t redrawing immediately when it receives damage from a client, but is waiting a bit for more damage, then it’s possible it might wait too long and miss the vertical reblank entirely. Then the frame rate could drop way down, even if there was plenty of CPU and GPU available.
According to this article by Microsoft, the screen refresh rate set by the user can be (and usually is) a fractional number. The user sets 59 Hz, the monitor's on-screen display reports 60 Hz, but in reality it is 59.94 Hz. What I need for an extremely smooth animation is the 59.94 Hz.
Using IDirect3DDevice9::GetDisplayMode I only get an int value, which by definition cannot represent the real timing (the same goes for EnumDisplaySettings). I encounter a visible stutter about every second because it reports the rounded/truncated 59. If I manually correct the reported timing in my application to 59.94, it runs smoothly.
Anybody knows how I can retrieve the real screen refresh rate?
My current workaround is mapping both 60 Hz and 59 Hz to a constant 59.94 Hz, but that's not satisfying.
If you are targeting Windows Vista or later, the answer depends on the mode in which your app is running.
If it is a windowed app (or windowed full-screen), refresh rate is controlled via the Desktop Window Manager (DWM) according to user settings and other factors. Use DwmGetCompositionTimingInfo and look at DWM_TIMING_INFO::rateRefresh to get the monitor refresh rate.
If the app is true full-screen, then the full-screen swap chain you create overrides the system default. However, your selected refresh rate (DXGI_SWAP_CHAIN_FULLSCREEN_DESC::RefreshRate) should match one of the monitor-supported refresh rates. You can get the list of supported refresh rates using IDXGIOutput::GetDisplayModeList. Here's an example of how to do so:
UINT numModes = 0;
// First call gets the number of modes; second call fills the array.
dxgiOutput->GetDisplayModeList(DXGI_FORMAT_B8G8R8A8_UNORM, 0, &numModes, NULL);
DXGI_MODE_DESC* modes = new DXGI_MODE_DESC[numModes];
dxgiOutput->GetDisplayModeList(DXGI_FORMAT_B8G8R8A8_UNORM, 0, &numModes, modes);
// Each modes[i].RefreshRate is a DXGI_RATIONAL (Numerator/Denominator),
// e.g. 60000/1001 for 59.94 Hz.
delete[] modes;
In any case, you shouldn't see glitching if you're triple-buffered. You should just present as fast as you can and the OS will present on time. If you combine triple-buffering with custom managed frame timing, you're guaranteed to not actually get triple-buffering, and you'll get glitches any time there's drift in the vblank phase (which happens gradually even if you have a perfect value for the refresh rate). If you want to stick with triple-buffering, just present as fast as you can and let the OS take care of presentation timing.

If you're using your own timing to drive Present()s (for example, to get low-latency response), you should throw in a call to IDXGIOutput::WaitForVBlank on another thread to help synchronize frame timings. If you end up doing that, you should also use IDXGISwapChain::GetFrameStatistics to make sure you recover from any spurious glitches, otherwise you'll end up a frame behind.
Good luck!
I'm making an app for OS X 10.7 and later that plays video. Any document can be taken full-screen using the standard full-screen commands.
I'd like to forestall the automatic screen dim and display sleep as long as any document in my app is playing.
Ideally, the end (or pausing) of all playing videos should start the full display sleep timer: a 3-minute display sleep delay shouldn't run out 1 minute and 37 seconds after the last video ends simply because something was resetting or disrupting the timer every 3 minutes.
I also don't want to disable display sleep outright. If my program crashes or is force quit or the power goes out, the user's display sleep settings should remain untouched.
What's the best way to ensure that playback is not considered “idle”, but that once playback finishes, display sleep after idle works correctly?
Take a power assertion during playback with IOPMAssertionCreateWithName(), and release it when done. Power assertions handle unexpected process termination correctly:
Assertions should be released with IOPMAssertionRelease. However, even if not properly released, assertions will be automatically released when the process exits, dies, or crashes. A crashed process will not prevent idle sleep indefinitely.
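A minimal sketch of that lifecycle in Swift (the class name and reason string are illustrative; I'm assuming the kIOPMAssertionTypePreventUserIdleDisplaySleep assertion type, which keeps the display awake and un-dimmed only while the assertion is held):
import IOKit.pwr_mgt

final class PlaybackSleepBlocker {
    private var assertionID: IOPMAssertionID = 0
    private var isActive = false

    // Call when the first video starts playing.
    func playbackStarted() {
        guard !isActive else { return }
        let result = IOPMAssertionCreateWithName(
            kIOPMAssertionTypePreventUserIdleDisplaySleep as CFString,
            IOPMAssertionLevel(kIOPMAssertionLevelOn),
            "MyApp: a document is playing video" as CFString,
            &assertionID)
        isActive = (result == kIOReturnSuccess)
    }

    // Call when the last video ends or is paused; the normal
    // display sleep countdown starts from this point.
    func playbackStopped() {
        guard isActive else { return }
        IOPMAssertionRelease(assertionID)
        isActive = false
    }
}
Because assertions are also released when the process exits or crashes (as quoted above), a force quit or crash leaves the user's display sleep settings untouched.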
I am working on a short animated story, which has a scrubbable timeline and chapter headings. I used TimelineMax for sequencing it. For the most part, it is working fine. I am seeing some strange behavior pop up, though: sprites disappear, functions stop responding to user input, seams of the sprites become transparent -- all small issues, but pretty hard to nail down because they happen only on the Mac.
So I am wondering: what is wrong with Flash, and why does it misbehave on a Mac?
I frequently work on the same projects on Windows at work and then on my Mac at home, and I also see some differences on the Mac compared to Windows. I find that the various Flash Player versions for the Mac are generally slower than the Windows players, and I've seen some odd behavior on the Mac that does not happen on Windows.
In most cases I've narrowed this down to AS3's garbage collection. Garbage collection happens when the player determines that an object no longer has a reference in the movie, so it removes that object to free up memory. Let's say you have a class method like this:
function myTweenFunction():void {
    var myTween:Tween = new Tween(myDisplayObject, 'x', Strong.easeInOut, 0, 500, 10, true);
    myTween.addEventListener(TweenEvent.MOTION_FINISH, onMyTweenDone);
}
The method above will tween myDisplayObject's x value from 0 to 500 over the course of 10 seconds. When that tween is done, it should fire the onMyTweenDone method (not shown). However, myTween was created inside myTweenFunction so it only exists in the scope of myTweenFunction. When myTweenFunction is done, the myTween object is no longer referenced by any object in the movie so it becomes a candidate for garbage collection. You will start to see the tween, but at some point it will stop before it gets to 500 and the finish event will not fire. This means that myTween has been destroyed. To fix this problem, myTween needs to be a member of the class, or just needs to have a reference outside of a class function.
Getting back to the Mac vs. Windows issues, I find that garbage collection of runtime-created objects is more apparent on the Mac than on Windows. Garbage collection happens in the Windows Flash Player too, but the tweens and other events may be finishing before garbage collection occurs, since the Windows Flash Player has better performance. If the Mac Flash Player is slower (i.e. the same tween might take longer), then garbage collection might happen before the tween is done. Garbage collection does not occur frame-by-frame like an animation; it's a background process that can happen at any time, or not at all if there's enough memory for the Flash Player. Your Windows machine might have a pile of RAM, so the movie can play fine without the need for garbage collection and myTween might never go away. If your Mac has less memory, or if you have a ton of apps open at once and the Flash Player memory allotment is limited, then the Flash Player will perform garbage collection more frequently.
I've also used TimelineMax, and there's an auto garbage collection feature that is turned on by default. Try turning that off and testing on the Mac.
Ultimately, you should design your project with the assumption that a user may have very limited memory, so your objects need to be created, referenced, and garbage collected accordingly.
I've encountered some rendering issues between plugin versions, especially when dealing with transparency, fonts, and embed settings.
If you are doing this in a web browser, try playing with the WMODE embed setting and see if your results change.