Timer Queues: immediately terminate a timer? (Windows)

I'm trying to achieve a high frame rate on Windows GDI by using Windows timer queues. The relevant APIs are CreateTimerQueue, DeleteTimerQueueEx, CreateTimerQueueTimer, and DeleteTimerQueueTimer.
The timer is created with CreateTimerQueueTimer(&m_timer, m_timer_queue, TimerCallback, this, 0, 20, WT_EXECUTEINTIMERTHREAD); to get roughly 50fps. The GDI operations (some painting into the backing store, plus InvalidateRect) cannot run asynchronously, so the only flag I can use is WT_EXECUTEINTIMERTHREAD, which keeps the drawing code free of extra synchronization. The idea is to hit 50fps when possible, and when that isn't possible, to show each frame at the maximum speed the machine can manage.
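For reference, a minimal sketch of the setup described above (the member and callback names are the ones quoted; the wrapping class and StartAnimation function are placeholders):

#include <windows.h>

class AnimationWindow {              // placeholder class wrapping the members from the question
    HANDLE m_timer_queue = NULL;
    HANDLE m_timer = NULL;
public:
    static VOID CALLBACK TimerCallback(PVOID param, BOOLEAN timerFired) {
        // Runs on the timer-queue thread because of WT_EXECUTEINTIMERTHREAD:
        // paint into the backing store, then InvalidateRect(...).
    }
    bool StartAnimation() {
        m_timer_queue = CreateTimerQueue();
        // 0 ms initial delay, 20 ms period => roughly 50 fps when the system keeps up.
        return CreateTimerQueueTimer(&m_timer, m_timer_queue, TimerCallback,
                                     this, 0, 20, WT_EXECUTEINTIMERTHREAD) != FALSE;
    }
};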
At the end of the animation (reached a total frame count), DeleteTimerQueueTimer is called to destroy the timer.
The problem is that DeleteTimerQueueTimer doesn't immediately stop the callback from being invoked. When the 50fps target can't be met, the timer queues up pending callback invocations, and calling DeleteTimerQueueTimer inside the callback doesn't clear that backlog. As a result, the callback keeps getting called even after it has decided to shut the timer down.
How do I deal with this problem?
-
On another note, the old timeSetEvent / timeKillEvent multimedia API doesn't seem to have this problem. There is no queue, and the callback stops being called as soon as I call timeKillEvent. Is it possible to get the same behavior with timer queues?

You can pass the WT_EXECUTEONLYONCE flag to the CreateTimerQueueTimer function. This will cause the timer to trigger only once and not periodically.
You can then reschedule the timer with the ChangeTimerQueueTimer function.
To cover the cases where your drawing takes too long to complete within the frame, you can enter a critical section at the start of your TimerHandler, which makes the second callback wait until the first one completes.
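A rough sketch of what that could look like, assuming placeholder names of my own (g_cs, g_queue, g_timer, FRAME_MS) and the question's ~20 ms frame interval:

#include <windows.h>

CRITICAL_SECTION g_cs;
HANDLE g_queue = NULL;
HANDLE g_timer = NULL;
const DWORD FRAME_MS = 20;

VOID CALLBACK TimerHandler(PVOID param, BOOLEAN timerFired)
{
    EnterCriticalSection(&g_cs);        // a late second tick waits here
    // ... draw the frame ...
    LeaveCriticalSection(&g_cs);

    // Re-arm the one-shot timer for the next frame (check the ChangeTimerQueueTimer
    // docs for its notes on one-shot timers that have already expired).
    ChangeTimerQueueTimer(g_queue, g_timer, FRAME_MS, 0);
}

void Start()
{
    InitializeCriticalSection(&g_cs);
    g_queue = CreateTimerQueue();
    // One-shot timer: due in FRAME_MS, period 0, WT_EXECUTEONLYONCE.
    CreateTimerQueueTimer(&g_timer, g_queue, TimerHandler, NULL,
                          FRAME_MS, 0, WT_EXECUTEONLYONCE);
}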

If you want to run something at 50fps+, you'd probably do better to have an actual draw loop that computes the time elapsed between frames and scales the animation accordingly. Timers aren't really meant to fire that often. The loop (which would probably live in your idle handler) looks something like this (error handling omitted):
static LARGE_INTEGER freq, last_frame;
QueryPerformanceFrequency(&freq);          // ticks per second
QueryPerformanceCounter(&last_frame);
while (true) {
    LARGE_INTEGER current_frame;
    QueryPerformanceCounter(&current_frame);
    double delta = double(current_frame.QuadPart - last_frame.QuadPart) / freq.QuadPart;
    // Do drawing here; scale the amount to move by how much time (delta, in seconds) has elapsed
    last_frame = current_frame;
}

DeleteTimerQueueTimer will cancel the timer provided it has not already been scheduled. (When you use WT_EXECUTEINTIMERTHREAD, I believe the callbacks are queued as APCs on a thread from the thread pool shared by the timer queues and worker threads.) If it has already been scheduled (not just running), it will still run, and the DeleteTimerQueueTimer call will block until it completes.
If I understand your problem correctly, may I suggest the following?
1. Before calling DeleteTimerQueueTimer, set a flag, say abortAllTimers, to true.
2. In each timer callback, check whether abortAllTimers is true. If it is, return at once without doing any drawing.
And finally, DeleteTimerQueueTimer should not be called from the timer callback itself. Instead, call it from another thread, say the thread you used to start the timers.
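A minimal sketch of that flag approach (g_abortAllTimers, AnimationCallback and StopAnimation are placeholder names, not from the question):

#include <windows.h>
#include <atomic>

std::atomic<bool> g_abortAllTimers{false};

VOID CALLBACK AnimationCallback(PVOID param, BOOLEAN timerFired)
{
    if (g_abortAllTimers.load())
        return;                     // skip drawing once shutdown has started
    // ... paint back buffer, InvalidateRect, etc. ...
}

// Called from the thread that created the timer, not from the callback:
void StopAnimation(HANDLE timerQueue, HANDLE timer)
{
    g_abortAllTimers.store(true);
    // INVALID_HANDLE_VALUE makes the call wait for any in-flight callback to finish.
    DeleteTimerQueueTimer(timerQueue, timer, INVALID_HANDLE_VALUE);
}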
Hope this helps.

Kotlin coroutines slow start

I've been attempting to do a bit of performance review on an app I have. It's a back-end Kotlin app that just pulls in some data, does a bit of data transformation and dumps it out, nothing too fancy. One thing that caught my eye was the final bit of execution, where we dump our final data onto a queue. At first I noticed that when we start up the app, the final network call takes a very long time, sometimes over a second. Normally we run this network call in a coroutine to stop that last call blocking everything, but when I started trying to time the coroutine and the network call separately I got some odd results: from what I can see, the coroutine can take forever to launch/complete compared to the network call. It's entirely possible I'm not recording things correctly, but this is the general timing approach I have:
val coroutineTime = Instant.now().toEpochMilli()
GlobalScope.launch {
    executionTime = measureTimeMillis { /* do message sending */ }
    totalTime = Instant.now().toEpochMilli() - coroutineTime
    // Log out executionTime and totalTime
}
Now, what I'll see here is something like:
- totalTime = ~800ms
- executionTime = ~150ms
These aren't one-offs either. I have multiple of these processes going on at once (up to 10 threads, I think), and the first total times always take significantly longer than the actual executionTime/network call. Eventually, after a few dozen messages, the overhead calms down and the two times become equivalent at about 15ms, but having nearly 700ms of overhead on coroutine start-up seems insane to me.
Is this normal/expected behavior? I've tested this in a separate app and seen similar but less extreme results, where the first coroutine takes about 70ms to boot up. I'm struggling to find any other examples of this type of discussion outside of Kotlin being used in Android development.
As a first note, it's almost never a good idea to use GlobalScope unless you really know what you're doing. This is why it was marked as a delicate API. You should instead use a scope that is appropriately closed (following the lifecycle of whatever component launches this work).
Now, AFAIK, GlobalScope runs on the default dispatcher, so maybe this is due to a cold start of that default thread pool. Later, it could also be a problem to use this dispatcher for network calls, depending on the number of concurrent coroutines you have. It would be more appropriate to use Dispatchers.IO for IO-bound work (or a custom thread pool).
It still doesn't explain the cold start, but I would first change that before investigating.
This is expected behavior if you use coroutines inappropriately ;-)
My guess is that your message sending is a blocking operation. By default, GlobalScope.launch() dispatches coroutines with Dispatchers.Default, which is designed for CPU-intensive operations; it has a limited number of threads and you should never block when using it. If you do, you may run out of threads, and coroutines will have to wait until the blocking operations finish.
If you need to run blocking or IO code, you should use Dispatchers.IO instead:
GlobalScope.launch(Dispatchers.IO) { /* blocking or IO-bound work */ }
I was facing a similar issue. I have a function that loads some data from shared prefs, makes some calculations on the data (all of this done on Dispatchers.Default), and returns the result on Dispatchers.Main. I measured how long it took the coroutine to actually start executing the block inside Dispatchers.Main.launch { } after the calculations are done (the time from tag2 to tag3 below), and got about 950ms (!!). Here is the function:
fun someName() {
    CoroutineScope(Dispatchers.Default).launch {
        val time = System.currentTimeMillis()
        // load data and calculations
        Log.d("tag2", "load and calculations took " + (System.currentTimeMillis() - time))
        CoroutineScope(Dispatchers.Main.immediate).launch {
            Log.d("tag3", "reached main thread code " + (System.currentTimeMillis() - time))
            // do something
            Log.d("tag4", "do something took " + (System.currentTimeMillis() - time))
        }
    }
}
But then I realized this happens during app launch, when the main thread is busy creating all the UI, so even with .immediate it takes time until the main thread gets around to executing the dispatched code. I then tried running this function after the app had already started and was idle, and found that tag2 to tag3 takes about 1ms (!!) (with .immediate). So it looks like when you dispatch something in a coroutine, it will start immediately as long as the target thread isn't busy.

Xamarin.Forms.Device.StartTimer - Is There a Non-Recurring Version of This? (A Simple Delay)

Xamarin.Forms.Device.StartTimer is a convenient method to repeatedly call some code at a certain interval. This is similar to JavaScript's setInterval() method. JavaScript also has a method to set a single delay, called setTimeout(): it waits a certain amount of time, then calls the callback code once. Is there a setTimeout equivalent for Xamarin.Forms, where I simply want the code to be called in the background after a certain delay?
NOTE: I know I can return false to stop the recurrence, and that's easy enough. It just seems like if you're only intending to call the callback once, it's a little semantically misleading to use this recurring timer mechanism.
StartTimer will do this
While the callback returns true, the timer will keep recurring.
Simply return false to stop the timer
You could start a Task with delay:
async Task DoSomethingOnceWithDelay(TimeSpan delay)
{
    await Task.Delay(delay);
    await MyTask();
}
Official doc.

What is considered overloading the main thread?

I am displaying information from a data model on a user interface. My current approach to doing so is by means of delegation as follows:
@protocol DataModelDelegate <NSObject>
- (void)updateUIFromDataModel;
@end
I am implementing the delegate method in my controller class as follows, using GCD to push the UI updating to the main thread:
- (void)updateUIFromDataModel {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Code to update various UI controllers
        // ...
        // ...
    });
}
What I am concerned about is that in some situations, this method can be called very frequently (~1000 times per second, each updating multiple UI objects), which to me feels very much like I am 'spamming' the main thread with commands.
Is this too much to be sending to the main thread? If so does anyone have any ideas on what would be the best way of approaching this?
I have looked into dispatch_apply, but that appears to be more useful when coalescing data, which is not what I am after - I really just want to skip updates if they are too frequent so only a sane amount of updates are sent to the main thread!
I was considering taking a different approach and implementing a timer instead to constantly poll the data, say every 10 ms, however since the data updating tends to be sporadic I feel that it would be wasteful to do so.
Combining both approaches, another option I have considered would be to wait for an update message and respond by setting the timer to poll the data at a set interval, and then disabling the timer if the data appears to have stopped changing. But would this be over-complicating the issue, and would the sane approach be to simply have a constant timer running?
edit: Added an answer below showing the adaptations using a dispatch source
One option is to use a Dispatch Source with type DISPATCH_SOURCE_TYPE_DATA_OR which lets you post events repeatedly and have libdispatch combine them together for you. When you have something to post, you use dispatch_source_merge_data to let it know there's something new to do. Multiple calls to dispatch_source_merge_data will be coalesced together if the target queue (in your case, the main queue) is busy.
I have been experimenting with dispatch sources and have it working as expected now. Here is how I adapted my class implementation, in case it is of use to anyone who comes across this question:
@implementation AppController {
@private
    dispatch_source_t _gcdUpdateUI;
}

- (void)awakeFromNib {
    // Added the following code to set up the dispatch source event handler:
    _gcdUpdateUI = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0,
                                          dispatch_get_main_queue());
    dispatch_source_set_event_handler(_gcdUpdateUI, ^{
        // For each UI element I want to update, pull data from model object:
        // For testing purposes - print out a notification:
        printf("Data Received. Messages Passed: %ld\n",
               dispatch_source_get_data(_gcdUpdateUI));
    });
    dispatch_resume(_gcdUpdateUI);
}
And now in the delegate method I have removed the call to dispatch_async, and replaced it with the following:
- (void)updateUIFromDataModel {
    dispatch_source_merge_data(_gcdUpdateUI, 1);
}
This is working absolutely fine for me. Now even during the most intense data updating, the UI stays perfectly responsive.
Although the printf() output was a very crude way of checking whether the coalescing was working, a quick scroll back up the console output showed me that the majority of the printed messages had a value of 1 (easily 98% of them); however, there were intermittent jumps to around 10-20, reaching a peak of just over 100 coalesced messages around the time the model was sending the most update messages.
Thanks again for the help!
If the app beach-balls under heavy load, then you've blocked the main thread for too long and you need to implement a coalescing strategy for UI updates. If the app remains responsive to clicks, and doesn't beach-ball, then you're fine.

MATLAB: flushing event queue with drawnow

The function drawnow
causes figure windows and their children to update, and flushes the system event queue. Any callbacks generated by incoming events (e.g., mouse or key events) are dispatched before drawnow returns.
I have the following script:
clear all;
clc;
t = timer;
set(t, 'Period', 1);
set(t, 'ExecutionMode', 'fixedSpacing');
set(t, 'TimerFcn', @(event, data) disp('Timer rollover!'));
start(t);
while(1)
    % do something interesting
    drawnow;
end
With the drawnow in place, the timer callback fires every second. Without it, no callback runs because the while loop is "blocking".
My questions:
1) Is there a way to flush the queue without updating figure windows?
2) When we say "flush the event queue", do we mean "execute everything in the event queue", "execute what's next in the queue and drop everything else out of the queue", or something else entirely?
I have multiple callback functions from multiple separate timers running in the background of my program. Missing any of these callbacks is not an option for me. I just wanted to clarify and make sure I'm doing the right thing.
1) Not to my knowledge - at least, I believe the only way to flush the queue is to call drawnow. Depending on what you mean by 'update figure windows', you may be able to prevent drawnow from having an undesirable effect (e.g. by removing data sources before calling drawnow).
2) I can't test this right now, but based on how I've used it before, and the description you gave above, I'm pretty sure it's "execute everything in the queue".
Another thing I'm not sure about is whether you need while 1; drawnow - don't events work as you would expect if you just end the script after start(t)? I thought drawnow was only necessary if you are doing some other stuff e.g. inside the while loop.
Placing a small pause in the loop, for example pause(0.001), also frees up some time for the timer. Some examples:
start(t); while(1); end; %No timer events occur
start(t); while(1); pause(0.001); end %Timer events occur
start(t); while(1); drawnow; end %Timer events occur (your original example)
start(t); while(1); pause(0); end %No timer events (just thought I'd check)

How to Use Timers in Windows

What are the various ways that a timer can be set up using the Windows API, and what are the pros and cons of each method?
I'm using MS DevStudio's C++.
The basic timer API on Windows consists of two functions: SetTimer and KillTimer (I know, the names are odd - CreateTimer and DestroyTimer would be more sensible, as in CreateWindow and DestroyWindow, but that is what is available).
SetTimer can function in one of two modes: the timer event can trigger a user defined callback or it can post a message to a window. The format of this function is:
timer_id = SetTimer (window, event_id, interval, callback);
To use a callback:
timer_id = SetTimer (NULL, NULL, interval_in_milliseconds, callback);
To get a WM_TIMER message to a window:
timer_id = SetTimer (window, event_id, interval_in_milliseconds, NULL);
In both cases, the calling thread needs to have a message queue, as both variants post a WM_TIMER message; for the callback variant, the default handling of that message (via DispatchMessage) calls the callback function.
Depending on the OS you're using the value of interval has upper and lower bounds. See the API documentation for more details.
To release the timer after you're finished with it do the following if you provided a window handle:
KillTimer (window, event_id); // event_id is important!
and if you used a callback:
KillTimer (NULL, timer_id);
A single window can have many timers associated with it, use a different event_id for each timer. Reusing an event_id stops the first instance of the timer without posting the WM_TIMER message.
Pros: fairly easy to use.
Cons: latency between interval end and processing of WM_TIMER message, resolution is large, requires a message processing loop.
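For illustration, a minimal console-style sketch of the callback variant (TimerProc and the 1000 ms interval are made up for the example):

#include <windows.h>

VOID CALLBACK TimerProc(HWND hwnd, UINT msg, UINT_PTR timerId, DWORD tickCount)
{
    // Runs on the calling thread, dispatched from its message loop.
}

int main()
{
    UINT_PTR timerId = SetTimer(NULL, 0, 1000, TimerProc);   // 1000 ms interval

    // The thread must pump messages for WM_TIMER to be delivered;
    // DispatchMessage invokes TimerProc. (Loop exits on WM_QUIT.)
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    KillTimer(NULL, timerId);
    return 0;
}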
Another method for handling timers is to use waitable timer objects. These don't require any message processing, don't use WM_TIMER or callbacks. As such, they're a bit more complex. Understanding the Windows event system will be helpful.
There are three types of timer objects: manual-reset, synchronisation and periodic; and there are four functions for handling the timer objects: CreateWaitableTimer, SetWaitableTimer, CancelWaitableTimer and CloseHandle (there is a fifth, OpenWaitableTimer, but that is unlikely to be useful to many people). There is also a set of functions for being notified when a timer expires, WaitForSingleObject, WaitForMultipleObjects and MsgWaitForMultipleObjects being the most useful.
The usual method for using these timers is:
CreateWaitableTimer (...)
SetWaitableTimer (...)
WaitForSingleObject (...)
CloseHandle (...)
Compare this to SetTimer: the only way to know whether a waitable timer has expired is to wait on it, either by polling in a loop or by waiting with an infinite timeout (i.e. suspending the thread until the timer elapses).
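For illustration, a minimal sketch of a periodic waitable timer (the 20 ms period and the fixed frame count are made up for the example):

#include <windows.h>

int main()
{
    // FALSE => auto-reset (synchronisation) timer.
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);

    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -20LL * 10000;   // first expiry in 20 ms (negative = relative, 100-ns units)
    SetWaitableTimer(timer, &dueTime, 20 /* period in ms */, NULL, NULL, FALSE);

    for (int frame = 0; frame < 100; ++frame) {
        WaitForSingleObject(timer, INFINITE);   // block until the timer signals
        // ... do one frame of work ...
    }

    CancelWaitableTimer(timer);
    CloseHandle(timer);
    return 0;
}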
Pros: very flexible, no need to have a message queue.
Cons: hard to use
Usually, look at the API you are going to use, for example MFC, Qt or GTK; they all have timer classes.
If you're not going to use a GUI API, I personally like boost::timer (www.boost.org)
For high-resolution timing, use QueryPerformanceCounter.
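For example, a small sketch of timing a piece of work with it (the Sleep call merely stands in for whatever is being measured):

#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // ticks per second, fixed at boot

    QueryPerformanceCounter(&start);
    Sleep(16);                          // stand-in for the work being timed
    QueryPerformanceCounter(&end);

    double elapsed_ms = 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}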
