If I inspect the timestamps of the callbacks from SFSpeechRecognizer's recognitionTask (on macOS):
recognitionTask = speechRecognizer.recognitionTask( with: recognitionRequest )
{ result, error in
    // if more than two seconds elapsed since the last update, we send a notification
    NSLog( "speechRecognizer.recognitionTask callback" )
    :
... I observe:
:
2019-11-08 14:51:00.35 ... speechRecognizer.recognitionTask callback
2019-11-08 14:51:00.45 ... speechRecognizer.recognitionTask callback
2019-11-08 14:51:32.31 ... speechRecognizer.recognitionTask callback
i.e. there is an additional unwanted callback approximately 30 seconds after my last utterance.
result is nil for this last callback.
The fact that it's close to 30 seconds suggests to me that it represents a maximum timeout.
I'm not expecting a timeout, because I have manually shut down my session (at around the 5 s mark, by clicking a button):
@objc
func stopRecording()
{
    print( "stopRecording()" )

    // Instructs the task to stop accepting new audio (e.g. stop recording) but complete processing on audio already buffered.
    // This has no effect on URL-based recognition requests, which effectively buffer the entire file immediately.
    recognitionTask?.finish()

    // Indicate that the audio source is finished and no more audio will be appended.
    recognitionRequest?.endAudio()
    //self.recognitionRequest = nil

    audioEngine.stop()
    audioEngine.inputNode.removeTap( onBus: 0 )

    //recognitionTask?.cancel()
    //self.recognitionTask = nil

    self.timer?.invalidate()

    print( "stopRecording() DONE" )
}
There's a lot of commented-out code, as it seems to me there is some process I'm failing to shut down, but I can't figure out which.
Complete code is here.
Can anyone see what's going wrong?
NOTE: in this solution there is still one unwanted callback immediately after the destruction sequence is initiated, hence I'm not going to accept the answer yet. Maybe there exists a more elegant solution to this problem, or someone may shed light on the observed phenomena.
The following code does the job:
is_listening = true

recognitionTask = speechRecognizer.recognitionTask( with: recognitionRequest )
{ result, error in
    if !self.is_listening {
        NSLog( "IGNORED: speechRecognizer.recognitionTask callback" )
        return
    }

    // if more than two seconds elapsed since the last update, we send a notification
    self.timer?.invalidate()

    NSLog( "speechRecognizer.recognitionTask callback" )

    self.timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false) { _ in
        if !self.is_listening {
            print( "IGNORED: Inactivity (timer) Callback" )
            return
        }
        print( "Inactivity (timer) Callback" )
        self.timer?.invalidate()
        NotificationCenter.default.post( name: Dictation.notification_paused, object: nil )
        //self.stopRecording()
    }
    :
}
@objc
func stopRecording()
{
    print( "stopRecording()" )

    is_listening = false

    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)
    audioEngine.inputNode.reset()

    recognitionRequest?.endAudio()
    recognitionRequest = nil

    timer?.invalidate()
    timer = nil

    recognitionTask?.cancel()
    recognitionTask = nil
}
The exact sequence is taken from https://github.com/2bbb/ofxSpeechRecognizer/blob/master/src/ofxSpeechRecognizer.mm -- I'm not sure to what extent it is order-sensitive.
For starters, that destruction sequence seems to eliminate the 30-second (timeout?) callback.
However, the closures still get hit after stopRecording().
To deal with this I've created an is_listening flag.
I have a hunch Apple should maybe have internalised this logic within the framework, but meh, it works; I am a happy bunny.
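One caveat on the flag itself: SFSpeechRecognizer delivers its result handlers on its queue property, which defaults to the main queue. If that queue is ever changed to a background queue, is_listening becomes shared mutable state. A minimal sketch of protecting it in that case, assuming a dedicated serial queue (the queue label and backing variable are illustrative, not from the original project):
private let stateQueue = DispatchQueue(label: "Dictation.state") // illustrative label
private var _is_listening = true

// Serialise all reads and writes of the flag through one queue.
var is_listening: Bool {
    get { stateQueue.sync { _is_listening } }
    set { stateQueue.sync { _is_listening = newValue } }
}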
Related
I am following the reference documentation on coroutines (https://kotlinlang.org/docs/reference/coroutines/cancellation-and-timeouts.html) and testing the examples in Android Studio as I go (like a good boy).
But I am very confused by the cancelAndJoin() method.
If I replace "cancelAndJoin" with "join" in the example code, there is no difference in the logs.
Here is the code:
fun main() = runBlocking {
    val startTime = System.currentTimeMillis()
    val job = launch(Dispatchers.Default) {
        var nextPrintTime = startTime
        var i = 0
        while (i < 5) { // computation loop, just wastes CPU
            // print a message twice a second
            if (System.currentTimeMillis() >= nextPrintTime) {
                println("job: I'm sleeping ${i++} ...")
                nextPrintTime += 500L
            }
        }
    }
    delay(1300L) // delay a bit
    println("main: I'm tired of waiting!")
    job.cancelAndJoin() // cancels the job and waits for its completion
    println("main: Now I can quit.")
}
In both cases (join or cancelAndJoin) the logs are:
job: I'm sleeping 0 ...
job: I'm sleeping 1 ...
job: I'm sleeping 2 ...
main: I'm tired of waiting!
job: I'm sleeping 3 ...
job: I'm sleeping 4 ...
main: Now I can quit.
Can anybody explain the difference between the two methods?
This is not clear, because cancel is "stopping the job" and join is "waiting for completion", but the two together??? We "stop" and "wait"???
Thanks in advance :)
This is not clear, because cancel is "stopping the job" and join is "waiting for completion", but the two together??? We "stop" and "wait"?
Just because we wrote job.cancel() doesn't mean the job will be cancelled immediately.
When we call job.cancel(), the job enters a transient cancelling state. It has yet to enter the final cancelled state, and only after the job enters the cancelled state can it be considered completely cancelled.
Observe how the state transition occurs:
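For reference, this is the state machine as drawn in the kotlinx.coroutines Job documentation (reproduced here in text form):

                                      wait children
+-----+ start  +--------+ complete   +-------------+ finish +-----------+
| New | -----> | Active | ---------> | Completing  | ------> | Completed |
+-----+        +--------+            +-------------+         +-----------+
                 |  cancel / fail       |
                 |     +----------------+
                 |     |
                 V     V
             +------------+                           finish  +-----------+
             | Cancelling | --------------------------------> | Cancelled |
             +------------+                                   +-----------+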
When we call job.cancel(), the cancellation procedure (the transition from active/completing to cancelling) starts for the job. But we still need to wait, because the job is truly cancelled only when it reaches the cancelled state. Hence we write job.cancelAndJoin(), where cancel() starts the cancellation procedure for the job and join() makes sure we wait until the job has reached the cancelled state.
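A minimal sketch that makes the transient state visible (the delays are arbitrary; the comments describe what I would expect, not output from the original post):
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        try {
            delay(5000L) // cancellable suspension point
        } finally {
            println("child: cleaning up")
        }
    }
    delay(100L) // let the child start
    job.cancel()
    // The job is now in the transient cancelling state: already cancelled, not yet complete.
    println("after cancel: isCancelled=${job.isCancelled} isCompleted=${job.isCompleted}")
    job.join()
    // Only after join() returns has it reached the final cancelled state.
    println("after join:   isCancelled=${job.isCancelled} isCompleted=${job.isCompleted}")
}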
Make sure to check out this excellent article
Note: the state diagram is shamelessly copied from the linked article itself
OK, I've just realized that my question was stupid!
cancelAndJoin() is equivalent to calling cancel() and then join().
The code demonstrates that the job cannot be cancelled if we never check whether it is still active, and to do that we must use isActive.
So, like this:
val startTime = System.currentTimeMillis()
val job = launch(Dispatchers.Default) {
    var nextPrintTime = startTime
    var i = 0
    while (isActive) { // cancellable computation loop
        // print a message twice a second
        if (System.currentTimeMillis() >= nextPrintTime) {
            println("job: I'm sleeping ${i++} ...")
            nextPrintTime += 500L
        }
    }
}
delay(1300L) // delay a bit
println("main: I'm tired of waiting!")
job.cancelAndJoin() // cancels the job and waits for its completion
println("main: Now I can quit.")
I wrote a small app which checks some folders every few seconds. I ran this app for approximately one week. Now I saw that it had occupied 32 GB of my physical 8 GB RAM, so the system forced me to stop it.
So it seems the app eats up memory very slowly. I tried Instruments; with the Activity Monitor instrument, the only slowly growing process is "DTServiceHub". Is this something I have to watch?
For debugging I write some information to standard output with print. Is this information dropped because the app is a Cocoa app, or stored somewhere until the app terminates? In that case I would have to remove all these print statements.
Some code for looping:
func startUpdating() {
    running = true
    timeInterval = 5
    run.isEnabled = false
    setTimer()
}

func setTimer() {
    timer = Timer(timeInterval: timeInterval, target: self, selector: #selector(ViewController.update), userInfo: nil, repeats: false)
    RunLoop.current.add(timer, forMode: RunLoopMode.commonModes)
}

@objc func update() {
    writeLOG("update -Start-", level: 0x2)
    ...
    setTimer()
}
There are several problems with your code. You should not create a Timer and then add it to a run loop every time you set it, because the run loop retains that Timer and will not release it until it is invalidated. It is even worse when you do that on every update. Just create the timer once. If you don't need it anymore, you have to invalidate it to remove it from the run loop and allow its release. If you just need to adjust it, set the next fire date.
Here is an example that was tested in a playground:
import Cocoa

class MyTimer: NSObject
{
    var fireTime = 10.0
    var timer: Timer!

    override init()
    {
        super.init()
        // create the timer once; the run loop keeps it alive until it is invalidated
        self.timer = Timer.scheduledTimer(timeInterval: fireTime, target: self, selector: #selector(update), userInfo: nil, repeats: true)
    }

    deinit
    {
        // removes the timer from the run loop and enables its release
        timer.invalidate()
    }

    @objc func update()
    {
        print("update")
        let calendar = Calendar.current
        let date = Date()
        if let nextFireDate = calendar.date(byAdding: .second, value: Int(fireTime), to: date)
        {
            print(date)
            timer.fireDate = nextFireDate
        }
    }
}
let timer = MyTimer()
CFRunLoopRun()
The problem is that your app loops forever and thus never drains the autorelease pool. You should never write an app like that. If you need to check something periodically, use an NSTimer. If you need to hear about changes in a folder, use NSWorkspace or kqueue or similar; in other words, use callbacks. Never just loop.
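For the folder case specifically, a kqueue-backed GCD dispatch source is one way to get such callbacks on macOS. A minimal sketch, assuming a hard-coded path (the path and the handler body are placeholders):
import Foundation

// Watch a directory for changes via a kqueue-backed dispatch source.
let fd = open("/path/to/folder", O_EVTONLY) // placeholder path
let source = DispatchSource.makeFileSystemObjectSource(
    fileDescriptor: fd,
    eventMask: .write, // fires when the directory's contents change
    queue: .main)
source.setEventHandler {
    print("folder changed") // placeholder handler
}
source.setCancelHandler {
    close(fd) // release the descriptor once the source is cancelled
}
source.resume() // keep a strong reference to `source` for as long as you want callbacks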
let dispatchGroup = dispatch_group_create()
let now = DISPATCH_TIME_NOW

for i in 0..<1000 {
    dispatch_group_enter(dispatchGroup)
    // Do some async tasks
    let delay = dispatch_time(now, Int64(Double(i) * 0.1 * Double(NSEC_PER_SEC)))
    dispatch_after(delay, dispatch_get_main_queue(), {
        print(i)
        dispatch_group_leave(dispatchGroup)
    })
}
The print statement prints the first 15-20 numbers smoothly; however, as i gets larger, the printing becomes sluggish. I had more complicated logic inside dispatch_after and noticed the processing was very sluggish, which is why I wrote this test.
Is there a buffer size or other property that I can configure? It seems dispatch_get_main_queue() doesn't work well with a large number of async tasks.
Thanks in advance!
The problem isn't dispatch_get_main_queue(). (You'll notice the same behavior if you use a different queue.) The problem rests in dispatch_after().
When you use dispatch_after, it creates a dispatch timer with a leeway of 10% of the interval between now and the fire time. See the Apple GitHub libdispatch source. The net effect is that when these timers (start ± 10% leeway) overlap, the system may start coalescing them. When they're coalesced, they appear to fire in a "clumped" manner: a bunch of them fire immediately, one right after another, and then there's a little delay before it gets to the next bunch.
There are a couple of solutions, both of which entail replacing the series of dispatch_after calls:
You can build timers manually, forcing DispatchSource.TimerFlag.strict to disable coalescing:
let group = DispatchGroup()
let queue = DispatchQueue.main
let start = CACurrentMediaTime()

os_log("start")

for i in 0 ..< 1000 {
    group.enter()
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue) // use `.strict` to avoid coalescing
    timer.setEventHandler {
        timer.cancel() // reference timer so it has a strong reference until the handler is called
        os_log("%d", i)
        group.leave()
    }
    timer.schedule(deadline: .now() + Double(i) * 0.1)
    timer.resume()
}

group.notify(queue: .main) {
    let elapsed = CACurrentMediaTime() - start
    os_log("all done %.1f", elapsed)
}
Personally, I dislike that reference to timer inside the closure, but you need to keep some strong reference to it until the timer fires, and GCD timers release the block (avoiding a strong reference cycle) when the timer is canceled or finishes. This is an inelegant solution, IMHO.
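An alternative to that self-reference trick, reusing the group and loop from the snippet above, is to keep the timers alive in a collection owned by the caller (the timers array is my addition, not part of the original answer):
var timers = [DispatchSourceTimer]() // owning this array keeps the timers alive

for i in 0 ..< 1000 {
    group.enter()
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: .main)
    timer.setEventHandler {
        os_log("%d", i)
        group.leave()
    }
    timer.schedule(deadline: .now() + Double(i) * 0.1)
    timer.resume()
    timers.append(timer)
}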
It is more efficient, though, to just schedule a single repeating timer that fires every 0.1 seconds:
var timer: DispatchSourceTimer? // note: this is a property, to make sure we keep a strong reference

func startTimer() {
    let queue = DispatchQueue.main
    let start = CACurrentMediaTime()
    var counter = 0

    // Do some async tasks
    timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
    timer!.setEventHandler { [weak self] in
        guard counter < 1000 else {
            self?.timer?.cancel()
            self?.timer = nil
            let elapsed = CACurrentMediaTime() - start
            os_log("all done %.1f", elapsed)
            return
        }
        os_log("%d", counter)
        counter += 1
    }
    timer!.schedule(deadline: .now(), repeating: 0.1) // fire every 0.1 s, matching the original dispatch_after spacing
    timer!.resume()
}
This not only solves the coalescing problem, but it is also more efficient.
For Swift 2.3 rendition, see previous version of this answer.
Below I've put the source of CWnd::RunModalLoop, the message loop run when you call CDialog::DoModal; it takes over as a nested message loop until the dialog is ended.
Note that, with a couple of special-case exceptions, ShowWindow is only called when the message queue is idle.
This is causing a dialog not to appear for many seconds in some cases in our application when DoModal is called. If I debug into the code and set breakpoints, I see the phase 1 loop is not reached until that time. However, if I create the same dialog modelessly (call Create then ShowWindow), it appears instantly; but that would be an awkward change to make just to fix a bug without understanding it well.
Is there a way to avoid this problem? Perhaps I can call ShowWindow explicitly at some point, or post a message to trigger the idle behaviour? I read "Old New Thing - Modality", which was very informative but didn't answer this question, and I can only find the problem rarely mentioned on the web, without successful resolution.
wincore.cpp: CWnd::RunModalLoop
int CWnd::RunModalLoop(DWORD dwFlags)
{
    ASSERT(::IsWindow(m_hWnd)); // window must be created
    ASSERT(!(m_nFlags & WF_MODALLOOP)); // window must not already be in modal state

    // for tracking the idle time state
    BOOL bIdle = TRUE;
    LONG lIdleCount = 0;
    BOOL bShowIdle = (dwFlags & MLF_SHOWONIDLE) && !(GetStyle() & WS_VISIBLE);
    HWND hWndParent = ::GetParent(m_hWnd);
    m_nFlags |= (WF_MODALLOOP|WF_CONTINUEMODAL);
    MSG *pMsg = AfxGetCurrentMessage();

    // acquire and dispatch messages until the modal state is done
    for (;;)
    {
        ASSERT(ContinueModal());

        // phase1: check to see if we can do idle work
        while (bIdle &&
            !::PeekMessage(pMsg, NULL, NULL, NULL, PM_NOREMOVE))
        {
            ASSERT(ContinueModal());

            // show the dialog when the message queue goes idle
            if (bShowIdle)
            {
                ShowWindow(SW_SHOWNORMAL);
                UpdateWindow();
                bShowIdle = FALSE;
            }

            // call OnIdle while in bIdle state
            if (!(dwFlags & MLF_NOIDLEMSG) && hWndParent != NULL && lIdleCount == 0)
            {
                // send WM_ENTERIDLE to the parent
                ::SendMessage(hWndParent, WM_ENTERIDLE, MSGF_DIALOGBOX, (LPARAM)m_hWnd);
            }
            if ((dwFlags & MLF_NOKICKIDLE) ||
                !SendMessage(WM_KICKIDLE, MSGF_DIALOGBOX, lIdleCount++))
            {
                // stop idle processing next time
                bIdle = FALSE;
            }
        }

        // phase2: pump messages while available
        do
        {
            ASSERT(ContinueModal());

            // pump message, but quit on WM_QUIT
            if (!AfxPumpMessage())
            {
                AfxPostQuitMessage(0);
                return -1;
            }

            // show the window when certain special messages rec'd
            if (bShowIdle &&
                (pMsg->message == 0x118 || pMsg->message == WM_SYSKEYDOWN))
            {
                ShowWindow(SW_SHOWNORMAL);
                UpdateWindow();
                bShowIdle = FALSE;
            }

            if (!ContinueModal())
                goto ExitModal;

            // reset "no idle" state after pumping "normal" message
            if (AfxIsIdleMessage(pMsg))
            {
                bIdle = TRUE;
                lIdleCount = 0;
            }
        } while (::PeekMessage(pMsg, NULL, NULL, NULL, PM_NOREMOVE));
    }

ExitModal:
    m_nFlags &= ~(WF_MODALLOOP|WF_CONTINUEMODAL);
    return m_nModalResult;
}
So to answer my own question, the solution I found was to explicitly call the following two methods:
ShowWindow(SW_SHOWNORMAL);
UpdateWindow();
CWnd::RunModalLoop is supposed to call these, but only when it detects that the message queue is empty/idle. If that doesn't happen, the dialog exists and blocks input to other windows, but isn't visible.
From tracking messages I found WM_ACTIVATE was the last message sent before things got stuck, so I added an OnActivate() handler to my dialog class, along the lines of the sketch below.
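Roughly like this (a sketch of the handler; CMyDialog, the visibility check, and the message-map entry are illustrative, not from the original post):
// In the message map: ON_WM_ACTIVATE()
void CMyDialog::OnActivate(UINT nState, CWnd* pWndOther, BOOL bMinimized)
{
    CDialog::OnActivate(nState, pWndOther, bMinimized);

    // If RunModalLoop's idle phase never ran, the dialog exists but is hidden;
    // force it visible the first time it is activated.
    if (nState != WA_INACTIVE && !(GetStyle() & WS_VISIBLE))
    {
        ShowWindow(SW_SHOWNORMAL);
        UpdateWindow();
    }
}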
I found a very similar problem in my application. It only happened when the application was under a heavy drawing load (the user can open many views, the views show real-time data, so they are constantly refreshing, and each view must be drawn independently; the drawing process takes a lot of time). Under that scenario, if the user tries to open any modal dialog (say, the "About" dialog, or a MessageBox the app needs to show), it "freezes", and the dialog only shows after pressing the ALT key.
Analyzing RunModalLoop() I reached the same conclusion you did: the first loop (labeled "phase1" in the code comments) is never entered, since it needs the message queue to be empty, and it never is; the app then falls into the second loop, phase2, from which it never exits, because the call to PeekMessage() at the end of phase2 never returns zero (the message queue is kept busy by so many views constantly updating).
Since this code is in wincore.cpp, my solution was to find the closest function that could be overridden, and luckily I found ContinueModal(), which is virtual. Since I already had a class derived from CDialog which I always use as a replacement, I only had to define in that class a BOOL member m_bShown and an override of ContinueModal():
BOOL CMyDialog::ContinueModal()
{
    // Begin extra code by Me
    if (m_nFlags & WF_CONTINUEMODAL)
    {
        if (!m_bShown && !(GetStyle() & WS_VISIBLE))
        {
            ShowWindow(SW_SHOWNORMAL);
            UpdateWindow();
            m_bShown = TRUE;
        }
    }
    // End extra code

    return m_nFlags & WF_CONTINUEMODAL;
}
This causes ContinueModal() to actually show the window if it has never been shown before. I set m_bShown to FALSE both in its declaration and in OnInitDialog(), so it arrives at RunModalLoop() as FALSE, and I set it to TRUE immediately after showing the window to disable the little additional code snippet from then on.
This solved the problem for me and should solve it for anybody. The only remaining doubts are: 1) is RunModalLoop() called from somewhere else within MFC that would conflict with this modification? and 2) why did Microsoft program RunModalLoop() in this weird way, leaving a glaring hole that can cause an app to freeze without any apparent reason?
I recently had the same problem.
I solved it by posting the undocumented message 0x118 (WM_SYSTIMER) before calling DoModal(); that message is one of the special cases handled in phase2.
...
PostMessage(0x118, 0, 0);
return CDialog::DoModal();
I want to implement a scheduler class which any object can use to schedule timeouts and cancel them if necessary. When a timeout expires, this information is sent asynchronously to the object that set it (its owner).
So, for this purpose, I have two fundamental classes: WindowsTimeout and WindowsScheduler.
class WindowsTimeout
{
    bool mCancelled;
    int mTimerID; // Windows handle identifying the actual timer set.
    ITimeoutReceiver* mSetter;

    int cancel()
    {
        mCancelled = true;
        if ( timeKillEvent(mTimerID) == TIMERR_NOERROR ) // Line under question # 1
        {
            delete this; // Timeout instance is self-destroyed.
            return 0; // ok. OS timer resource given back.
        }
        return 1; // fail. OS timer resource not given back.
    }

    WindowsTimeout(ITimeoutReceiver* setter, int timerID)
    {
        mSetter = setter;
        mTimerID = timerID;
    }
};
class WindowsScheduler
{
    static void CALLBACK timerFunction(UINT uID, UINT uMsg, DWORD dwUser, DWORD dw1, DWORD dw2)
    {
        // dwUser carries the user data passed to timeSetEvent, i.e. the timeout pointer.
        WindowsTimeout* timeout = (WindowsTimeout*) dwUser;
        if (timeout->mCancelled)
            delete timeout;
        else
            timeout->mSetter->GEN(evTimeout(timeout));
    }

    WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
    {
        int timerID = timeSetEvent(...);
        if (timerID != 0) // timeSetEvent returns 0 on failure
        {
            return new WindowsTimeout(setter, timerID);
        }
        return 0;
    }
};
My questions are:
Q.1. When a WindowsScheduler::timerFunction() call is made, in which context is it performed? It is simply a callback function, so I think it is performed in an OS context, right? If so, does the call pre-empt other tasks already running? I mean, do callbacks have higher priority than any user task?
Q.2. When a timeout setter wants to cancel its timeout, it calls WindowsTimeout::cancel().
However, there is always the possibility that the OS invokes the static timerFunction callback and pre-empts the cancel operation, for example just after the mCancelled = true statement. In such a case, the timeout instance will be deleted by the callback function.
When the pre-empted cancel() resumes after the callback completes, it will try to access a member of the deleted instance (mTimerID), as you can see on the line marked "Line under question # 1" in the code.
How can I avoid such a case?
Please note that this question is an improved version of a previous one of mine:
Windows multimedia timer with callback argument
Q1 - I believe it gets called within a thread allocated by the timer API. I'm not sure, but I wouldn't be surprised if that thread ran at a very high priority. (In Windows, that doesn't necessarily mean it will completely preempt other threads; it just means it will get more cycles than other threads.)
Q2 - I started to sketch out a solution for this, but then realized it was a bit harder than I thought. Personally, I would maintain a hash table that maps timer IDs to your WindowsTimeout instances; it could be a simple std::map guarded by a critical section, as sketched below. When the timer callback occurs, it enters the critical section, tries to obtain the WindowsTimeout instance pointer, flags the instance as having been executed, exits the critical section, and then actually executes the callback. If the hash table doesn't contain the WindowsTimeout instance, the caller has already removed it. Be very careful here.
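A rough sketch of that registry (the class and method names are mine, not from any library; cancel() and the callback would both call take(), and whichever gets NULL must not touch the object):
#include <windows.h>
#include <map>

class WindowsTimeout; // from the question

class TimeoutRegistry
{
    CRITICAL_SECTION mLock;
    std::map<UINT, WindowsTimeout*> mActive; // timerID -> timeout

public:
    TimeoutRegistry()  { InitializeCriticalSection(&mLock); }
    ~TimeoutRegistry() { DeleteCriticalSection(&mLock); }

    void add(UINT timerID, WindowsTimeout* t)
    {
        EnterCriticalSection(&mLock);
        mActive[timerID] = t;
        LeaveCriticalSection(&mLock);
    }

    // Atomically look up and remove: the "winner" of the race between the
    // callback and cancel() gets the pointer, the "loser" gets NULL.
    WindowsTimeout* take(UINT timerID)
    {
        EnterCriticalSection(&mLock);
        WindowsTimeout* t = NULL;
        std::map<UINT, WindowsTimeout*>::iterator it = mActive.find(timerID);
        if (it != mActive.end())
        {
            t = it->second;
            mActive.erase(it);
        }
        LeaveCriticalSection(&mLock);
        return t;
    }
};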
One subtle bug in your own code above:
WindowsTimeout* schedule(ITimeoutReceiver* setter, TimeUnit t)
{
    int timerID = timeSetEvent(...);
    if (timerID != 0)
    {
        return new WindowsTimeout(setter, timerID);
    }
    return 0;
}
In your schedule method, it's entirely possible that the timer armed by timeSetEvent will fire BEFORE you can create the WindowsTimeout instance.
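One way to close that gap, sketched under the assumption that the timeout pointer travels in timeSetEvent's dwUser argument (as in the callback above) and that TimeUnit converts to milliseconds:
WindowsTimeout* WindowsScheduler::schedule(ITimeoutReceiver* setter, TimeUnit t)
{
    // Create the object first, so the callback can never observe a
    // half-constructed instance.
    WindowsTimeout* timeout = new WindowsTimeout(setter, 0);
    MMRESULT timerID = timeSetEvent(t, 0, timerFunction,
                                    (DWORD_PTR)timeout, TIME_ONESHOT);
    if (timerID != 0) // timeSetEvent returns 0 on failure
    {
        timeout->mTimerID = timerID;
        return timeout;
    }
    delete timeout; // the timer was never armed
    return 0;
}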