I wrote a small app which polls every few seconds to check folders. I ran this app for approximately one week. Now I saw that it had occupied 32 GB of my physical 8 GB RAM, so the system forced me to stop it.
So it seems that the app eats up memory very slowly. I tried Instruments. In Activity Monitor the only slowly growing process is "DTServiceHub". Is this something I have to watch?
For debugging I write some information to standard output with print. Is this information dropped because the app is a Cocoa app, or is it stored somewhere until the app terminates? In that case I would have to remove all these print statements.
Some code for looping:
func startUpdating() {
    running = true
    timeInterval = 5
    run.isEnabled = false
    setTimer()
}

func setTimer() {
    timer = Timer(timeInterval: timeInterval, target: self, selector: #selector(ViewController.update), userInfo: nil, repeats: false)
    RunLoop.current.add(timer, forMode: RunLoopMode.commonModes)
}

@objc func update() {
    writeLOG("update -Start-", level: 0x2)
    ...
    setTimer()
}
There are several problems with your code. You should not create a new Timer and add it to a run loop every time you set it, because the run loop retains that Timer and will not release it until it is invalidated. Doing this again on every update makes it even worse. Just create the timer once. If you don't need it anymore, invalidate it to remove it from the run loop and allow its release. If you need to adjust it, just set the next fire date.
Here is an example that was tested in a playground:
import Cocoa

class MyTimer: NSObject
{
    var fireTime = 10.0
    var timer: Timer!

    override init()
    {
        super.init()
        // a selector-based timer needs an @objc target, hence the NSObject subclass
        self.timer = Timer.scheduledTimer(timeInterval: fireTime, target: self, selector: #selector(update), userInfo: nil, repeats: true)
    }

    deinit
    {
        // it will remove the timer from the run loop and enable its release
        timer.invalidate()
    }

    @objc func update()
    {
        print("update")
        let calendar = Calendar.current
        let date = Date()
        if let nextFireDate = calendar.date(byAdding: .second, value: Int(fireTime), to: date)
        {
            print(date)
            timer.fireDate = nextFireDate
        }
    }
}

let timer = MyTimer()
CFRunLoopRun()
The problem is that your app loops forever and thus never drains the autorelease pool. You should never write an app like that. If you need to check something periodically, use an NSTimer. If you need to hear about changes in a folder, use NSWorkspace, kqueue, or similar. In other words, use callbacks. Never just loop.
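To make the callback advice concrete, here is a minimal sketch of replacing an endless polling loop with a repeating GCD timer. Each tick runs and returns, so nothing blocks and per-tick temporaries can be released. The `FolderPoller` name and the interval are illustrative, not from the original app:

```swift
import Dispatch
import Foundation

// Fires a handler every `interval` seconds on a background queue.
// Unlike a loop that never returns to its caller, each tick completes
// and returns, so its temporaries can be freed between invocations.
final class FolderPoller {
    private let timer: DispatchSourceTimer

    init(interval: Double,
         queue: DispatchQueue = .global(qos: .utility),
         handler: @escaping () -> Void) {
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer.schedule(deadline: .now() + interval, repeating: interval)
        timer.setEventHandler(handler: handler)
        timer.resume() // no run loop involved; the source drives itself
    }

    func stop() {
        timer.cancel() // releases the handler and detaches the source
    }
}

// usage: check the folder every 5 seconds without blocking any thread
let poller = FolderPoller(interval: 5.0) {
    // scan the folder here
}
```

Calling `stop()` when the work is no longer needed is the moral equivalent of `Timer.invalidate()`.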
Related
If I inspect the timestamps of the callbacks from the SFSpeechRecognizer recognitionTask callback (on macOS):
recognitionTask = speechRecognizer.recognitionTask( with: recognitionRequest )
{ result, error in
    // if more than two seconds elapsed since the last update, we send a notification
    NSLog( "speechRecognizer.recognitionTask callback" )
    :
... I observe:
:
2019-11-08 14:51:00.35 ... speechRecognizer.recognitionTask callback
2019-11-08 14:51:00.45 ... speechRecognizer.recognitionTask callback
2019-11-08 14:51:32.31 ... speechRecognizer.recognitionTask callback
i.e. there is an additional, unwanted callback approximately 30 seconds after my last utterance.
result is nil for this last callback.
The fact that it's close to 30 seconds suggests to me that it represents a maximum timeout.
I'm not expecting a timeout, because I have manually shut down my session (at around the 5 s mark, by clicking a button):
@objc
func stopRecording()
{
    print( "stopRecording()" )

    // Instructs the task to stop accepting new audio (e.g. stop recording) but complete processing on audio already buffered.
    // This has no effect on URL-based recognition requests, which effectively buffer the entire file immediately.
    recognitionTask?.finish()

    // Indicate that the audio source is finished and no more audio will be appended
    recognitionRequest?.endAudio()
    //self.recognitionRequest = nil

    audioEngine.stop()
    audioEngine.inputNode.removeTap( onBus: 0 )

    //recognitionTask?.cancel()
    //self.recognitionTask = nil

    self.timer?.invalidate()

    print( "stopRecording() DONE" )
}
There's a lot of commented-out code, because it seems to me there is some process I'm failing to shut down, but I can't figure out which.
Complete code is here.
Can anyone see what's going wrong?
NOTE: in this solution there is still one unwanted callback immediately after the destruction sequence is initiated, hence I'm not going to accept the answer yet. Maybe there is a more elegant solution to this problem, or someone can shed light on the observed phenomena.
The following code does the job:
is_listening = true

recognitionTask = speechRecognizer.recognitionTask( with: recognitionRequest )
{ result, error in
    if !self.is_listening {
        NSLog( "IGNORED: speechRecognizer.recognitionTask callback" )
        return
    }

    // if more than two seconds elapsed since the last update, we send a notification
    self.timer?.invalidate()
    NSLog( "speechRecognizer.recognitionTask callback" )
    self.timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: false) { _ in
        if !self.is_listening {
            print( "IGNORED: Inactivity (timer) Callback" )
            return
        }
        print( "Inactivity (timer) Callback" )
        self.timer?.invalidate()
        NotificationCenter.default.post( name: Dictation.notification_paused, object: nil )
        //self.stopRecording()
    }
    :
}
@objc
func stopRecording()
{
    print( "stopRecording()" )

    is_listening = false

    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)
    audioEngine.inputNode.reset()

    recognitionRequest?.endAudio()
    recognitionRequest = nil

    timer?.invalidate()
    timer = nil

    recognitionTask?.cancel()
    recognitionTask = nil
}
The exact sequence is taken from https://github.com/2bbb/ofxSpeechRecognizer/blob/master/src/ofxSpeechRecognizer.mm -- I'm not sure to what extent it is order-sensitive.
For starters, that destruction sequence seems to eliminate the 30 s timeout(?) callback.
However, the closures still get hit after stopRecording.
To deal with this I've created an is_listening flag.
I have a hunch Apple should maybe have internalised this logic within the framework, but meh, it works; I am a happy bunny.
I am writing a macOS/Cocoa app that monitors a remote log file using a common recipe: launch a Process (formerly NSTask) instance on a background thread and read the process's stdout via a Pipe (formerly NSPipe), as listed below:
class LogTail {
    var process : Process? = nil

    func dolog() {
        //
        // Run ssh fred@foo.org /usr/bin/tail -f /var/log/system.log
        // on a background thread and monitor its stdout.
        //
        let processQueue = DispatchQueue.global(qos: .background)
        processQueue.async {
            //
            // Create process and associated command.
            //
            let process = Process()
            self.process = process
            process.launchPath = "/usr/bin/ssh"
            process.arguments = ["fred@foo.org",
                                 "/usr/bin/tail", "-f",
                                 "/var/log/system.log"]
            process.environment = [ ... ]

            //
            // Create pipe to read stdout of command as data is available.
            //
            let pipe = Pipe()
            process.standardOutput = pipe
            let outHandle = pipe.fileHandleForReading
            outHandle.readabilityHandler = { handle in
                if let string = String(data: handle.availableData,
                                       encoding: .utf8) {
                    // write string to NSTextView on main thread
                }
            }

            //
            // Launch process and block background thread
            // until process complete.
            //
            process.launch()
            process.waitUntilExit()

            //
            // What do I do here to make sure all related
            // threads terminate?
            //
            outHandle.closeFile()              // XXX
            outHandle.readabilityHandler = nil // XXX
        }
    }
}
Everything works just dandy, but when the process quits (killed via process.terminate) I notice (via Xcode's Debug Navigator and the Console app) that there are multiple threads consuming 180% or more of the CPU!?!
Where is this CPU leak coming from?
I threw in outHandle.closeFile() (see the code marked XXX above), and that reduced the CPU usage to just a few percent, but the threads still existed! What am I doing wrong, and how do I make sure all the related threads terminate? (I prefer graceful termination, i.e., thread bodies finish executing.)
Someone posted a similar question almost 5 years ago!
UPDATE:
The documentation for NSFileHandle's readabilityHandler says:
To stop reading the file or socket, set the value of this property to
nil. Doing so cancels the dispatch source and cleans up the file
handle’s structures appropriately.
so setting outHandle.readabilityHandler = nil seems to solve the problem too.
Even though I have seemingly solved the problem, I really don't understand where this massive CPU leak comes from -- very mysterious.
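For what it's worth, the teardown order that quote implies can be sketched in a small, self-contained example: stream stdout via readabilityHandler and clear the handler once the pipe reports end-of-file (empty availableData), which cancels the underlying dispatch source. Here /bin/echo stands in for the ssh command; the names are illustrative:

```swift
import Foundation

// Run a short-lived command, stream its stdout via readabilityHandler,
// and tear the handler down once the pipe hits EOF.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/bin/echo")
process.arguments = ["hello from tail"]

let pipe = Pipe()
process.standardOutput = pipe
let outHandle = pipe.fileHandleForReading

var collected = ""
let done = DispatchSemaphore(value: 0)

outHandle.readabilityHandler = { handle in
    let data = handle.availableData
    if data.isEmpty {                       // EOF: the writer closed the pipe
        handle.readabilityHandler = nil     // cancels the underlying dispatch source
        done.signal()
        return
    }
    if let string = String(data: data, encoding: .utf8) {
        collected += string                 // in a real app, hop to the main thread here
    }
}

try process.run()
process.waitUntilExit()
done.wait()
print(collected, terminator: "")            // → hello from tail
```

Clearing the handler at EOF (rather than only after waitUntilExit) guarantees no trailing output is dropped and no source is left polling a dead file descriptor.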
let dispatchGroup = dispatch_group_create()
let now = DISPATCH_TIME_NOW
for i in 0..<1000 {
    dispatch_group_enter(dispatchGroup)
    // Do some async tasks
    let delay = dispatch_time(now, Int64(Double(i) * 0.1 * Double(NSEC_PER_SEC)))
    dispatch_after(delay, dispatch_get_main_queue(), {
        print(i)
        dispatch_group_leave(dispatchGroup)
    })
}
The print statement prints the first 15-20 numbers smoothly; however, as i gets larger, it prints in a sluggish way. I had more complicated logic inside dispatch_after and noticed the processing was very sluggish, which is why I wrote this test.
Is there a buffer size or other property that I can configure? It seems dispatch_get_main_queue() doesn't work well with a bigger number of async tasks.
Thanks in advance!
The problem isn't dispatch_get_main_queue(). (You'll notice the same behavior if you use a different queue.) The problem rests in dispatch_after().
When you use dispatch_after, it creates a dispatch timer with a leeway of 10% of the delay between now and the when parameter. See the Apple libdispatch source on GitHub. The net effect is that when these timer windows (start ± 10% leeway) overlap, the system may start coalescing them. When they're coalesced, they appear to fire in a "clumped" manner: a bunch of them fire immediately one after another, then there is a little delay before the next bunch.
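The leeway in question is the same one you can pass explicitly when scheduling a GCD timer yourself. A minimal sketch (the 10% figure mirrors the dispatch_after behavior described above; the queue label and timings are arbitrary):

```swift
import Dispatch
import Foundation

// A one-shot GCD timer with an explicit leeway: the system is allowed to fire
// it anywhere in [deadline, deadline + leeway], which is what makes coalescing
// of nearby timers possible. dispatch_after behaves as if it had ~10% leeway.
let queue = DispatchQueue(label: "timer.demo")
let done = DispatchSemaphore(value: 0)
let start = Date()
var elapsed = 0.0

let timer = DispatchSource.makeTimerSource(queue: queue)
timer.schedule(deadline: .now() + 0.2, repeating: .never, leeway: .milliseconds(20)) // 10% of 0.2 s
timer.setEventHandler {
    elapsed = Date().timeIntervalSince(start)
    done.signal()
}
timer.resume()

done.wait()
timer.cancel()
print(String(format: "fired after %.3f s", elapsed)) // ≥ 0.2 s, possibly up to ~0.22 s
```

Passing `.strict` when creating the source (as in the solutions below in spirit, though each solution stands on its own) asks the system to honor the deadline and skip coalescing.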
There are a couple of solutions, all entailing the retirement of the series of dispatch_after calls:
You can build timers manually, forcing DispatchSource.TimerFlag.strict to disable coalescing:
let group = DispatchGroup()
let queue = DispatchQueue.main
let start = CACurrentMediaTime()

os_log("start")

for i in 0 ..< 1000 {
    group.enter()
    let timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue) // use `.strict` to avoid coalescing
    timer.setEventHandler {
        timer.cancel() // reference timer so it has a strong reference until the handler is called
        os_log("%d", i)
        group.leave()
    }
    timer.schedule(deadline: .now() + Double(i) * 0.1)
    timer.resume()
}

group.notify(queue: .main) {
    let elapsed = CACurrentMediaTime() - start
    os_log("all done %.1f", elapsed)
}
Personally, I dislike that reference to timer inside the closure, but you need to keep some strong reference to it until the timer fires, and GCD timers release the block (avoiding a strong reference cycle) when the timer is canceled or finishes. This is an inelegant solution, IMHO.
It is more efficient to just schedule single repeating timer that fires every 0.1 seconds:
var timer: DispatchSourceTimer? // note: this is a property, to make sure we keep a strong reference

func startTimer() {
    let queue = DispatchQueue.main
    let start = CACurrentMediaTime()
    var counter = 0

    // Do some async tasks
    timer = DispatchSource.makeTimerSource(flags: .strict, queue: queue)
    timer!.setEventHandler { [weak self] in
        guard counter < 1000 else {
            self?.timer?.cancel()
            self?.timer = nil
            let elapsed = CACurrentMediaTime() - start
            os_log("all done %.1f", elapsed)
            return
        }
        os_log("%d", counter)
        counter += 1
    }
    timer!.schedule(deadline: .now(), repeating: 0.1)
    timer!.resume()
}
This not only solves the coalescing problem, but it also is more efficient.
For Swift 2.3 rendition, see previous version of this answer.
Consider the following UIViewController implementation:
class ViewController: UIViewController {
    var foo:String[] = ["A","b","c"];
    override func viewDidLoad() {
        super.viewDidLoad()
        for (var i=0; i < 1000; i++) {
            foo += "bar";
        }
    }
}
This loop takes around 34 seconds to complete, consumes 100% CPU, and uses up 54 MB of RAM.
If I move the foo declaration into viewDidLoad, we get near-instant results.
My question: what is causing this?
In the Playground I've tried the following:
Changed the Environment to iOS
Built one function that just calls super.viewDidLoad() and another which also does the additional array concatenation
It takes approx. 7 secs just to call the super method, and an additional 3 secs (10 secs in total) with the array stuff. To me, 3 seconds for just 1000 operations suggests there must be additional debug options enabled. So, as @nschum suggests, try to make a release build.
Here is my code:
import UIKit
class ViewController: UIViewController
{
    var foo:String[] = ["A","b","c"];

    func viewDidLoadWithout() {
        super.viewDidLoad()
    }

    func viewDidLoadWith() {
        super.viewDidLoad()
        for (var i=0; i < 1000; i++) {
            foo += "bar";
        }
    }
}
var time: Int = Int(NSDate.timeIntervalSinceReferenceDate())
var cntrl = ViewController()
cntrl.viewDidLoadWithout()
time -= Int(NSDate.timeIntervalSinceReferenceDate())
time *= (-1) // 7secs
time = Int(NSDate.timeIntervalSinceReferenceDate())
cntrl = ViewController()
cntrl.viewDidLoadWith()
time -= Int(NSDate.timeIntervalSinceReferenceDate())
time *= (-1) // 10secs
Just guessing here.
In the former case you have a property of an Objective-C class, and the value is in fact an NSString. On each update of the property you not only have a dynamic-dispatch call to update the property, but also another to compute the new string. Each time there is a check for the possibility that some Objective-C observer needs notifying. And each intermediate value is allocated on the heap; all the strings also eventually get released and deallocated.
In the latter case it's a stack value of type String (not an NSString), and no one can see it but this code. The compiler could in fact even pre-compute the size of the final result, allocate it just once, then do a quick loop to fill it.
I'm not saying this is exactly what happens; I'm guessing. But the two cases are certainly different.
34 seconds, however, is a lot. There must be something else, too.
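One way to probe the allocation guess with a current toolchain is to compare plain appends against reserveCapacity, which pays the reallocation cost once up front. A rough sketch (the function name is made up, and absolute timings depend on machine and optimization level; 1000 appends should be effectively instant in a release build either way):

```swift
import Foundation

// Compare appending with and without preallocating the array's storage.
// Repeated appends may reallocate the backing buffer as the array grows,
// while reserveCapacity pays that cost once.
func appendBars(reserve: Bool) -> (count: Int, seconds: Double) {
    var foo: [String] = ["A", "b", "c"]
    if reserve { foo.reserveCapacity(foo.count + 1000) }
    let start = Date()
    for _ in 0..<1000 {
        foo.append("bar")
    }
    return (foo.count, Date().timeIntervalSince(start))
}

let plain = appendBars(reserve: false)
let reserved = appendBars(reserve: true)
print(plain.count, reserved.count) // both 1003
```

If the heap-allocation hypothesis were the whole story, the reserved variant would be measurably faster; in practice both are far below a second, supporting the suspicion that the 34 seconds came from debug-build overhead rather than the array itself.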
I am porting an app that reads data from a BT device to the Mac. In the Mac-specific code, I have a class with the delegate methods for the BT callbacks, like -(void)rfcommChannelData:(...)
In that callback, I fill a buffer with the received data. I have a function called from the app:
-(int) m_timedRead:(unsigned char*)buffer length:(unsigned long)numBytes time:(unsigned int)timeout
{
    double steps = 0.01;
    double time = (double)timeout/1000;
    bool ready = false;
    int read, total = 0;
    unsigned long restBytes = numBytes;
    while (!ready) {
        unsigned char *ptr = buffer+total;
        read = [self m_readRFCOMM:(unsigned char*)ptr length:(unsigned long)restBytes];
        total += read;
        if (total >= numBytes) {
            ready = true; continue;
        }
        restBytes = numBytes-total;
        CFRunLoopRunInMode(kCFRunLoopDefaultMode, .4, false);
        time -= steps;
        if (time <= 0) {
            ready = true; continue;
        }
    }
    return total;
}
My problem is that this run loop makes the whole app extremely slow. If I don't use the default mode, and instead create my own run loop with a run-loop timer, the callback method rfcommChannelData never gets called. I create my own run loop with the following code:
// CFStringRef myCustomMode = CFSTR("MyCustomMode");
// CFRunLoopTimerRef myTimer;
// myTimer = CFRunLoopTimerCreate(NULL,CFAbsoluteTimeGetCurrent()+1.0,1.0,0,0,foo,NULL);
// CFRunLoopAddTimer(CFRunLoopGetCurrent(), myTimer, myCustomMode);
// CFRunLoopTimerInvalidate(myTimer);
// CFRelease(myTimer);
Any idea why the default run loop slows down the whole app, or how to make my own run loop allow the rfcommChannelData callbacks to be triggered?
Many thanks,
Anton Albajes-Eizagirre
If you're working on the main thread of a GUI app, don't run the run loop internally in your own methods. Install run-loop sources (or allow the frameworks' asynchronous APIs to install sources on your behalf) and just return to the main event loop. That is, let the flow of execution return out of your code and back to your caller. The main event loop runs the run loop of the main thread and, when sources are ready, their callbacks will fire, which will probably call your methods.
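A sketch of that callback style, in Swift for brevity and with hypothetical names: the BT delegate feeds incoming chunks to append(_:) as they arrive, and the caller supplies a completion instead of blocking inside something like m_timedRead. Nothing here spins a run loop:

```swift
import Foundation

// Accumulates incoming RFCOMM chunks and invokes a completion once the
// requested number of bytes has arrived (or a timeout elapses), instead of
// spinning the run loop while waiting. Assumes single-threaded (main-queue) use.
final class TimedReader {
    private var buffer = [UInt8]()
    private let wanted: Int
    private var completion: (([UInt8]) -> Void)?

    init(wanted: Int, timeout: TimeInterval, queue: DispatchQueue = .main,
         completion: @escaping ([UInt8]) -> Void) {
        self.wanted = wanted
        self.completion = completion
        // on timeout, deliver whatever has arrived so far
        queue.asyncAfter(deadline: .now() + timeout) { [weak self] in
            self?.finish()
        }
    }

    // called from the delegate callback (e.g. rfcommChannelData) as data arrives
    func append(_ chunk: [UInt8]) {
        buffer += chunk
        if buffer.count >= wanted { finish() }
    }

    private func finish() {
        completion?(buffer)
        completion = nil // ensure the completion fires at most once
    }
}
```

The delegate callback stays short, control returns to the main event loop between chunks, and the app never stalls waiting for data.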