Anyone know why nextEventMatchingMask:untilDate:inMode:dequeue: takes many ms to return an event? - cocoa

In an OS X game, calling this was recommended as the way to get keyboard and mouse events.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
for(;;)
{
    NSEvent* event = [NSApp nextEventMatchingMask:NSAnyEventMask untilDate:nil inMode:NSDefaultRunLoopMode dequeue:YES];
    if(!event) break;
    processevent(event);
    ...
}
[pool release];
which is called in the game's main loop (it's cross-platform).
Since the most recent versions of OS X 10.5.x, this call has suddenly been taking many milliseconds per event when an event is available, and the game's frame rate is affected any time an event appears. If there are multiple events it can take as long as 10 ms per frame on a slower Mac.
Anyone have a clue as to why this is? Or what I can do alternatively to get events without impacting the game so much?
I tried managing the mouse events myself by getting the mouse position manually and warping it to the center when it gets close to the edge of the screen, but that causes a hitch in the motion (only when the cursor is hidden, of course).
Other alternatives might be getting input from the HID Manager, which we already use for joysticks, but HID is not terribly clear.
The faster the Mac, the more noticeable these hitches from getting events are.

I think you need to release and re-allocate the autorelease pool inside your loop: as you have the loop now, all the autoreleased objects just build up and are never flushed.
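A minimal sketch of that change, using the same loop from the question (processevent is the questioner's own function):
for(;;)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSEvent* event = [NSApp nextEventMatchingMask:NSAnyEventMask untilDate:nil inMode:NSDefaultRunLoopMode dequeue:YES];
    if(!event)
    {
        [pool release]; // flush before leaving the loop
        break;
    }
    processevent(event);
    [pool release]; // flush the objects autoreleased while handling this event
}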

Offhand, I don't know why the method is taking so long to return. That's worth investigating on the cocoa-dev list or another Apple forum resource. My guess is that managing the events yourself is a bad idea — AppKit is optimized for that, and you can safely bet it will be a lot faster than thrown-together custom code.
However, there is something you can do to keep it from affecting your game: put it in a separate thread. This is a suggested approach to keep your UI from freezing up during a long method call. Apple has published an Introduction to Threading programming guide that can help you get up to speed with the critical concepts you'd need.

I think you have to pass an actual value in the untilDate: argument, like [NSDate distantFuture] or [NSDate distantPast]. The method blocks until an event is available in the former case, and returns immediately with a nil event in the latter.
I learned this from the GLFW source code.
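In a game loop that must not block, that means the [NSDate distantPast] form; here is a sketch of the call from the first question with only the untilDate: argument changed:
// Returns immediately with nil if no event is queued, instead of waiting.
NSEvent* event = [NSApp nextEventMatchingMask:NSAnyEventMask untilDate:[NSDate distantPast] inMode:NSDefaultRunLoopMode dequeue:YES];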

Related

Updating contents of multiple NSTextView objects in a single operation

The database application I am working on can have a window with multiple NSTextView elements for displaying and editing data. When the current spot in the database is repositioned, all of the NSTextView objects in the window need to be updated with new contents. This is done with a loop that scans each object and checks to see if it needs to be updated. If it does, the new value is calculated, then updated by using the [NSTextView setString:] method. Here is a simplified version of the code involved.
for (id formObject in formObjectsInWindow) {
    NSTextView * objectTextView = [formObject textView];
    NSString * updatedValue = [formObject calculateValue];
    [objectTextView setString: updatedValue];
}
This works, but if there are a lot of objects it is somewhat slow. Probably related, the display does not update all at once; you can actually see a "ripple" as the objects are updated, as illustrated in this movie (the movie has been slowed down to 1/4 speed to make the ripple effect more pronounced, but it is definitely visible at full speed).
If you've gotten this far, you might suspect that the calculateValue method is slow, but that isn't the problem. In other places the same code is used and runs at tens of thousands of operations per second. Also, this delay only occurs during update operations; it doesn't occur when the window is first opened, even though the same calculations are required at that time. Here is an example. Notice that when I switch back to the detail view, all the NSTextView objects update instantaneously, even though the record changed and all of the values are different.
My suspicion is that the [NSTextView setString:] method is updating the off-screen buffer and then immediately copying it to the on-screen buffer, so this double buffering happens over and over for each item, causing the delay and the ripple. If so, I'm sure there must be some way to prevent that so the screen is only updated once at the end, after all of the values have been updated. It's probably something simple that I am missing, but I'm afraid I'm stumped as to how this is supposed to be done.
By the way, this application does not use layer-backed views, and is not linked against the QuartzCore framework.
I brought up this question with Apple engineers at the WWDC 2018 labs. It turns out the problem is that the setString: method does not mark the NSTextView object as needing display. The system does eventually notice that the text has changed and updates the display, but it does so asynchronously, hence the "ripple" effect. So the workaround is simply to add a call to setNeedsDisplay: after calling setString:.
[objectTextView setString: updatedValue];
[objectTextView setNeedsDisplay:YES];
Adding this one line of code fixed the problem, no more ripple effect.
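In the context of the update loop from the question, the workaround looks like this (same loop, one added line):
for (id formObject in formObjectsInWindow) {
    NSTextView * objectTextView = [formObject textView];
    NSString * updatedValue = [formObject calculateValue];
    [objectTextView setString: updatedValue];
    [objectTextView setNeedsDisplay:YES]; // mark the view dirty so it redraws in the same display pass as the others
}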
I'm told that this is actually a bug, so hopefully this extra line won't be needed in future versions of macOS. As requested, a radar has been filed (41273611 if any Apple engineers are reading this).

What is the recommended frequency for UI changes?

I have a Cocoa application window (NSWindow) whose position on the screen should be updated frequently (depending on some calculation). As noted in the documentation, UI changes should be made on the main thread:
void calculationThread()
{
    while(true)
    {
        calculatePosition();
        if(positionChanged)
        {
            dispatch_async(dispatch_get_main_queue(), ^{ setWindowPos(); });
        }
    }
}

void setWindowPos()
{
    [window setFrame:_newFrame display:YES];
}
Now the problem I have is that the window movement is very slow and delayed. After some profiling I see that the calculation process takes about 40 ms, meaning that I'm queueing up UI updates about 25 times a second.
I've read here that this might be faster than they can be processed and that a timer should be used to fire the changes every tenth of a second or so. But wouldn't that be too slow for the human eye? (I mean, in that case the movement wouldn't be delayed, but it would be laggy, causing pretty much the same effect.)
I would appreciate some knowledge sharing on this. My main 2 questions are:
Are 25-30 UI updates per second really too much?
If yes, what is the recommended frequency for UI changes?
The frequency at which a window can be moved around onscreen without problems will of course depend upon the speed of the user's machine, the video card they have, the size of the window, and probably a bunch of other factors. There is no single good answer to this. However, if you just drag a window around on your screen, you will notice that it can probably be moved very smoothly (unless your machine is very busy or very low on memory or something); I would not expect 25 times per second to produce a problem on a modern Mac. Not even close, in fact.
@RobNapier's points about Core Animation etc. are fine, but overstated I think; there is nothing inherently wrong with changing your UI using a timer or other periodic update if that is what you actually want to do. Core Animation is a toolkit for making some types of animation easier; using it is not required, and it is not suited to every problem. Similarly, if you want to make changes that are actually synced to the screen refresh then CVDisplayLink is useful, but it doesn't really sound like that's what you want to do.
For your purposes, your basic approach seems fine, although I would suggest adding an NSDate check in order to skip updates if the previous update was less than, say, 1/60th of a second ago. After all, the calculation appears to take 40 ms on your machine, but it might be much faster on some other machine; you want to throttle your drawing to a reasonable rate just to be a good citizen.
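A sketch of that throttle applied to the calculation thread from the question (I've used CFAbsoluteTimeGetCurrent rather than NSDate so there is no object to manage on the background thread; the 1/60 s threshold is just an example):
static CFAbsoluteTime lastUpdate = 0;

void calculationThread()
{
    while(true)
    {
        calculatePosition();
        CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
        if(positionChanged && (now - lastUpdate) > (1.0 / 60.0))
        {
            lastUpdate = now; // remember when we last queued a window move
            dispatch_async(dispatch_get_main_queue(), ^{ setWindowPos(); });
        }
    }
}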
So what is the problem, then? I suspect the issue might actually be your call to [window setFrame:_newFrame display:YES]. If you look at Apple's docs for that method, they state, "When YES the window sends a displayIfNeeded message down its view hierarchy, thus redrawing all views." Each time you call that method, then, you are not only moving your window (which I gather is your intention); you are redrawing all of the contents of the window, too, and that is slow. If you don't need to do that, then that is the overhead you need to eliminate. Call setFrameOrigin: or setFrameTopLeftPoint: instead (which make the semantics clear: you are moving the window without resizing it or redrawing it), or perhaps just pass NO instead of YES to setFrame:display:, and I'm guessing your performance problem will vanish.
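In terms of the code in the question, that change is a one-liner (a sketch; _newFrame is the questioner's own variable):
void setWindowPos()
{
    // Move the window without asking its view hierarchy to redraw.
    [window setFrameOrigin:_newFrame.origin];
}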
If you do in fact need to redraw the window contents every time, then please edit the problem description to reflect that. In that case, the solution will have to involve profiling why your window drawing is slow, and figuring out ways to optimize that, which is an entirely different problem.
As you've discovered, you should never try to drive the UI from a tight loop. You should let the UI drive you. There are three primary tools for that.
For simple problems, AppKit is capable of moving windows around the screen. Just call [NSWindow setFrame:display:animate:]. You can override animationResizeTime: to modify the timing.
In many cases AppKit doesn't give enough control. In those cases, the best tool is almost always Core Animation. You should tell the system, using Core Animation, where you want UI elements to wind up, and over what period and path, and let it do the work of getting them there. See the Core Animation Programming Guide for extensive documentation on how to use it. It focuses on animating CALayer, but the techniques are similar for NSWindow. You'll use [NSWindow setAnimations:] to add your animation. Look at the NSAnimatablePropertyContainer protocol (which NSWindow conforms to) for more information. For a simple sample project of animating NSWindow, see Just Say No from CIMGF.
In a few cases, you really do need to update the screen manually at the screen update frequency. I must stress how rare this situation is. In almost all cases, Core Animation is the correct tool. But in those rare case (some kinds of video for instance), you can use a CVDisplayLink to handle this. That will call you each time the screen would like to refresh, giving you an opportunity to update your content to match.

How to check for Command-Period in a tight loop?

I'm implementing a scripting language where the user might be causing an endless loop by accident. I want to give the user the opportunity to cancel such a runaway loop by holding down the command key while typing the period (".") key.
Currently, once for every line, I check for cancellation with this code:
NSEvent * evt = [[NSApplication sharedApplication] nextEventMatchingMask: NSKeyDownMask untilDate: [NSDate date] inMode: WILDScriptExecutionEventLoopMode dequeue: YES];
if( evt )
{
    NSString * theKeys = [evt charactersIgnoringModifiers];
    if( (evt.modifierFlags & NSCommandKeyMask) && theKeys.length > 0 && [theKeys characterAtIndex: 0] == '.' )
    {
        // +++ cancel script execution here.
    }
}
The problem with this is that it eats any keyboard events the user might type while the script is running, even though scripts should be able to check for keypresses. Also, it doesn't dequeue the corresponding NSKeyUp events. But if I tell it to dequeue key-up events as well, it might dequeue the key-up for a key that was already being held before my script started, and my application might never find out the key was released.
Also, I would like to not dequeue any events until I know one is actually a cancel event, but there is no separate dequeue call, and it feels unreliable to just assume the frontmost event on a second call will be the same one. And even if it is guaranteed to be the first, that would mean that if the user typed an 'a' and then Cmd-., I would only ever see the 'a' and never the Cmd-. behind it if I don't dequeue events.
Is there a better option than going to the old Carbon stand-by GetKeys()? Fortunately, that seems to be available in 64 bit.
Also, I'm thinking about adding an NSStatusItem that puts a button in the menu bar to cancel the script. But how would I process events in a way that doesn't let the user, e.g., select a menu while a script expects to own the main thread?
Any suggestions? Recommendations?
Using -addLocalMonitorForEventsMatchingMask: as Dave suggests is probably the easiest way to go about this, yes.
I just wanted to add that despite your unreliable feeling, the event queue really is a queue, and events don't change order. It is perfectly safe (and standard practice in event loops) to call -nextEventMatchingMask:untilDate:inMode:dequeue: with dequeue:NO, examine the event, determine that it is one you want to deal with, and then call it again with dequeue:YES in order to consume it. Just make sure that your mask and mode are identical between the two calls.
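Applied to the code in the question, that peek-then-consume pattern might look something like this (a sketch; note the identical mask, date, and mode in both calls):
// Peek at the next key-down without removing it from the queue.
NSEvent * evt = [NSApp nextEventMatchingMask: NSKeyDownMask untilDate: [NSDate date] inMode: WILDScriptExecutionEventLoopMode dequeue: NO];
if( evt )
{
    NSString * theKeys = [evt charactersIgnoringModifiers];
    if( (evt.modifierFlags & NSCommandKeyMask) && theKeys.length > 0 && [theKeys characterAtIndex: 0] == '.' )
    {
        // It really is Cmd-period, so now consume it.
        [NSApp nextEventMatchingMask: NSKeyDownMask untilDate: [NSDate date] inMode: WILDScriptExecutionEventLoopMode dequeue: YES];
        // +++ cancel script execution here.
    }
}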
I would suggest using an event monitor. Since you're asking NSApp for events, it would seem that you're running the script in the current process, so you only have to monitor events in your own process (and not globally).
There are several ways to do this (subclassing NSApplication and overriding -sendEvent:, putting in an event tap, etc), but the easiest way to do this would be with a local event monitor:
id eventHandler = [NSEvent addLocalMonitorForEventsMatchingMask:NSKeyDownMask handler:^NSEvent *(NSEvent *event) {
    // check the runloop mode
    // check for cmd-.
    // abort the script if necessary
    return event;
}];
When you're all done monitoring for events, don't forget to unregister your monitor:
[NSEvent removeMonitor:eventHandler];
So there's +[NSEvent modifierFlags], which is intended as a replacement for GetKeys(). Sadly, it doesn't cover your use case of the period key.
The core problem here with the event queue is you want to be able to search it, which isn't something the API exposes. The only workaround to that I can think of is to dequeue all events into an array, checking for a Command-. event, and then re-queue them all using postEvent:atStart:. Not pretty.
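A rough sketch of that drain-and-repost workaround, reusing the mask and mode from the question (untested, and as noted, not pretty):
NSMutableArray * heldEvents = [NSMutableArray array];
BOOL sawCancel = NO;
NSEvent * evt;
while( (evt = [NSApp nextEventMatchingMask: NSKeyDownMask untilDate: [NSDate distantPast] inMode: WILDScriptExecutionEventLoopMode dequeue: YES]) )
{
    NSString * theKeys = [evt charactersIgnoringModifiers];
    if( (evt.modifierFlags & NSCommandKeyMask) && theKeys.length > 0 && [theKeys characterAtIndex: 0] == '.' )
        sawCancel = YES;             // found Cmd-period; don't put it back
    else
        [heldEvents addObject: evt]; // keep everything else to re-queue
}
// Re-queue the untouched events in their original order.
for( NSEvent * heldEvent in heldEvents )
    [NSApp postEvent: heldEvent atStart: NO];
if( sawCancel )
{
    // +++ cancel script execution here.
}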
Perhaps as an optimisation you could use +[NSEvent modifierFlags] to only check the event queue when the command key is held down, but that sounds open to race conditions to me.
So final suggestion, override -postEvent:atStart: (on either NSApplication or NSWindow) and see if you can fish out the desired info there. I think at worst it could be interesting for debugging.
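A rough sketch of that -postEvent:atStart: override (WILDApplication is a hypothetical NSApplication subclass name, which you'd set as the app's principal class; whether the events you care about actually pass through here is exactly what you'd be checking):
@interface WILDApplication : NSApplication
@end

@implementation WILDApplication

- (void)postEvent:(NSEvent *)event atStart:(BOOL)atStart
{
    if( event.type == NSKeyDown
        && (event.modifierFlags & NSCommandKeyMask)
        && event.charactersIgnoringModifiers.length > 0
        && [event.charactersIgnoringModifiers characterAtIndex: 0] == '.' )
    {
        NSLog(@"saw Cmd-period being posted"); // +++ set a cancel flag for the script here.
    }
    [super postEvent:event atStart:atStart];
}

@end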

Correct way to drive Main Loop in Cocoa

I'm writing a game that currently runs in both Windows and Mac OS X. My main game loop looks like this:
while(running)
{
    ProcessOSMessages(); // Using Peek/Translate message in Win32
                         // and nextEventMatchingMask in Cocoa
    GameUpdate();
    GameRender();
}
That's obviously simplified a bit, but that's the gist of it. In Windows, where I have full control over the application, it works great. Unfortunately Apple has their own way of doing things in Cocoa apps.
When I first tried to implement my main loop in Cocoa, I couldn't figure out where to put it, so I created my own NSApplication per this post. I threw my GameFrame() right into my run function and everything worked correctly.
However, I don't feel like that's the "right" way to do it. I would like to play nicely within Apple's ecosystem rather than hacking together a solution that happens to work.
This article from Apple describes the old way to do it, with an NSTimer, and the "new" way using CVDisplayLink. I've hooked up the CVDisplayLink version, but it just feels odd. I don't like the idea of my game being driven by the display rather than the other way around.
Are my only two options to use a CVDisplayLink or replace NSApplication with my own? Neither of those solutions feels quite right.
I am curious to see if anyone who has actually done this cares to weigh in, but here is my understanding:
Apple pushes the CVDisplayLink solution over doing a loop on the main thread that uses -nextEventMatchingMask:untilDate:inMode:dequeue: because, I think, it provides better responsiveness for UI controls. This may not be relevant for full-screen games. (Note: You don't need to replace NSApplication to use that form of game loop.) I think the main potential issue with using CVDisplayLink is that it will only run one frame in advance and it does this determination early, which is even stronger than vertical sync. On the plus side, it might improve latency.
Other solutions include decoupling rendering from game logic and running game logic periodically on the main thread and rendering on the CVDisplayLink thread. I would probably only recommend this, however, if you run into issues with the game-driven-by-display paradigm.
You don't necessarily have to make your own NSApplication-based class or use CVDisplayLink to get around the fact that an app's run loop is hidden from you in Cocoa.
You could just create a thread and have your run loop in there instead.
For what it's worth though, I just use CVDisplayLink.
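For anyone wanting to see roughly what the CVDisplayLink route looks like, here is a minimal sketch (GameUpdate and GameRender stand in for the questioner's functions; note that the callback fires on a background thread driven by the display, not on the main thread):
#import <CoreVideo/CoreVideo.h>

extern void GameUpdate(void);
extern void GameRender(void);

static CVDisplayLinkRef displayLink;

// Called by Core Video once per display refresh.
static CVReturn DisplayLinkCallback(CVDisplayLinkRef link,
                                    const CVTimeStamp *now,
                                    const CVTimeStamp *outputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    GameUpdate();
    GameRender();
    return kCVReturnSuccess;
}

void StartGameLoop(void)
{
    CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
    CVDisplayLinkSetOutputCallback(displayLink, &DisplayLinkCallback, NULL);
    CVDisplayLinkStart(displayLink);
}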
I'm sticking something up here to revive this question, mainly out of portability concerns. I found from studying the OLC Pixel Game Engine that it works with a do{}while loop and std::chrono to check the timing of the frame and calculate fElapsedTime. Below is some code I wrote to do the same thing. It also adds a makeup portion to keep the framerate from shooting above a certain value, in this case 60 FPS.
C++ code
#include <chrono>
#include <thread>

int maxSpeedMicros = 16700;
float fTimingBelt; // used to calculate fElapsedTime for internal calls.
std::chrono::steady_clock::time_point timingBelt[2];
bool engineRunning = true; // always have it true, until the engine stops.
bool isPaused = false;
do {
    timingBelt[1] = std::chrono::steady_clock::now();
    fTimingBelt = std::chrono::duration_cast<std::chrono::microseconds>(timingBelt[1] - timingBelt[0]).count() * 0.000001;
    if (isPaused) {
        do {
            std::this_thread::sleep_for (std::chrono::milliseconds(100));
            timingBelt[1] = std::chrono::steady_clock::now();
        } while (isPaused);
    }
    timingBelt[0] = std::chrono::steady_clock::now();
    // do updating stuff here.
    timingBelt[1] = std::chrono::steady_clock::now();
    int frameMakeup = std::chrono::duration_cast<std::chrono::microseconds>(timingBelt[1] - timingBelt[0]).count();
    if (frameMakeup < maxSpeedMicros) {
        int micros = maxSpeedMicros - frameMakeup;
        std::this_thread::sleep_for (std::chrono::microseconds(micros));
    }
} while (engineRunning);
However, that code was in direct conflict with Cocoa's event-driven model.
Custom main application loop in cocoa
So as a band-aid, I commented out the whole loop and created a new method that runs one iteration of it. I then implemented this in my AppDelegate:
Objective-C code
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    engine->resetTimer();
    [NSTimer scheduledTimerWithTimeInterval:0.016666666667 target:self selector:@selector(engineLoop) userInfo:nil repeats:YES];
}

- (void)engineLoop { // Let the engine object handle this? That's too complicated!
    engine->updateState();
    [glView update]; // since the engine is doing all of its drawing to a GLView
    [[glView openGLContext] flushBuffer];
}
Still to do is adjusting the tolerance of the timer object. Apple's developer documentation states that if a timer misses its firing window, it will wait for the next frame time. A tolerance, however, allows the system to shift the timing of future firings to make smoother framerate transitions and better use of CPU power.
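Setting that tolerance is a one-line change when the timer is created; a sketch (the 2 ms value is just a guess at a reasonable amount of slack; NSTimer's tolerance property requires OS X 10.9 or later):
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:0.016666666667 target:self selector:@selector(engineLoop) userInfo:nil repeats:YES];
timer.tolerance = 0.002; // let the system shift firings by up to ~2 ms to coalesce wakeups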
So at this point I am open to suggestions and input about what others have done to make more portable code. I am planning on a boolean argument in the engine's constructor named "eventDriven"; if it is false, the engine will start its own game-loop thread and split out the top of the event loop into an "engineUpdate" method that handles all of the code that can be event driven. Then, when building on an event-driven system, the delegate can just construct the engine with eventDriven = TRUE and have its events drive the game update.
Has anyone done this? And if so, how does it perform cross-platform?

Refresh a NSOpenGLView within a loop without letting go of the main runloop in Cocoa

I am building a Cocoa/OpenGL app. For periods of about 2 seconds at a time, I need to control every video frame as well as write to a digital IO device.
If, after I make the OpenGL calls, I let go of the main thread (for example, if I make the OpenGL calls inside a timer fire-method with an interval of about 0.01 s), the NSOpenGLView is refreshed with every call to glFinish().
But if I instead keep the main thread busy, say in a 2-second-long while loop, the OpenGL calls won't work (surprisingly, the first call to glFinish() works but the rest don't).
The documentation says that glFinish should block the thread until the GL commands are executed.
Can anybody please help me understand what is going on here, or provide a solution to this problem?
To make it clear: I want to present 200 frames one after another without missing a frame, marking each frame refresh by writing to a digital IO port (I don't have a problem with that part), all on Snow Leopard.
This is not quite my department - pretty vanilla NSOpenGLView user myself - but from the Mac OpenGL docs it looks like you might want to use a CVDisplayLink (Q&A1385) for this. Even if that won't do it, the other stuff there should probably help.
EDIT
I've only done some basic testing on this, but it looks like you can do what you want as long as you first set the correct OpenGL context and then swap buffers after each frame (assuming you're using a double buffered context):
// inside an NSOpenGLView subclass, somewhere outside the usual drawing loop
- (void) drawMultipleFrames
{
    // it might be advisable to also do a [self lockFocus] here,
    // although it seems to work without that in my simple tests
    [[self openGLContext] makeCurrentContext];
    // ... set up common OpenGL state ...
    for ( int i = 0; i < LOTS_OF_FRAMES; ++i )
    {
        // ... draw your frame ...
        glFinish();
        glSwapAPPLE();
    }
    // unlockFocus here if locked earlier
}
I previously tried using [[self openGLContext] flushBuffer] at the end of each frame instead; that doesn't need glSwapAPPLE, but it doesn't block like glFinish, so you might get frames trampling over one another. This seems to work OK alongside other apps, runs in the background, etc., but of course YMMV.
