I have an external tool (written in perl) that does work through a C++ object interface. The launch and tear-down take time, so I'd like to have an NSOperationQueue and 4 NSOperation threads that run throughout the life of my app but only do work when I feed them a file. I can round-robin the files, as each one takes roughly the same amount of work.
How can I best do this, or is this asking more of NSOperation than it was designed for?
I have tried a normal NSOperation task for each file, but the launch and tear-down of the perl tool slows things down and sometimes hangs (not sure why, but it seems related to launches happening too quickly in sequence).
I'm looking to launch the perl tool once on each of 4 threads and then keep them around for the life of the app, as the perl tool will stay open waiting for commands... but those commands have to come from the thread it was launched from.
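The pattern being described here (long-lived workers, each owning a private queue so commands stay on the thread that launched its tool) can be sketched outside Cocoa. A minimal Ruby version, where the string-building stands in for the real perl work:

```ruby
# Four long-lived workers; each owns a private queue, so every command
# stays on the thread that (in the real app) launched the external tool.
NUM_WORKERS = 4
queues  = Array.new(NUM_WORKERS) { Queue.new }
results = Queue.new

workers = queues.map do |q|
  Thread.new do
    # ...launch the external tool once here and keep it open...
    while (file = q.pop) != :stop
      results << "processed #{file}"   # stand-in for the real work
    end
    # ...tear the tool down once, on shutdown...
  end
end

# Round-robin dispatch, since every file costs about the same.
files = %w[a.txt b.txt c.txt d.txt e.txt]
files.each_with_index { |f, i| queues[i % NUM_WORKERS] << f }

queues.each { |q| q << :stop }
workers.each(&:join)
```

The per-worker queue is the important part: it guarantees the "commands come from the launching thread" constraint without any locking around the tool itself.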
Related
Suppose you are making a GUI application, and you need to load/parse/calculate a bunch of things before a user can use a certain tool, and you know what you have to do beforehand.
Suddenly, it makes sense to start doing these calculations in the background over a period of time (as opposed to "in one go" on start-up or exactly when it is needed). However, doing too much in the background will slow down the responsiveness of the application.
Are there any standard practices in this kind of approach? Perhaps ways to detect low load on the CPU or the user being idle and execute code in those times? Arguments against this type of approach?
Thanks!
Without knowing your app or your audience, I can't give you specific advice.
My main argument against the approach is that unless you have a high-profile application which will see a lot of use by non-programmers, I wouldn't bother. This sounds like a lot of busy work that could be spent developing or refining features that actually allow people to do new things with your app.
That being said, if there is a reason to do it, there is nothing wrong with lazy-loading data.
The problem with waiting until idle time is that some people have programs like SETI@home installed on their computer, in which case their computer has little to no idle time. If loading full-throttle kills the responsiveness of your app, you could try injecting sleeps. This is what a lot of video games do when you specify a target frame rate, to avoid pegging the CPU. This would get the data loaded faster, rather than waiting for idle time.
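A minimal sketch of the sleep-injection idea in Ruby, assuming a hypothetical `work_fraction` knob (the `upcase` call stands in for real load/parse work):

```ruby
# Throttled background loading: after each chunk of work, sleep long
# enough that work occupies only ~work_fraction of wall-clock time.
def load_throttled(items, work_fraction: 0.5)
  loaded = []
  items.each do |item|
    started = Time.now
    loaded << item.to_s.upcase          # stand-in for real load/parse work
    elapsed = Time.now - started
    sleep(elapsed * (1.0 / work_fraction - 1.0))
  end
  loaded
end
```

Tuning `work_fraction` down trades load time for responsiveness, just like lowering a game's target frame rate.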
If parts of your app depend on data to work and the user invokes that part of the app, you will have to abandon the lazy-loading approach and resume your full CPU/disk-taxing load. If that takes a long time or makes the app unresponsive, you can display a loading dialog with a progress bar.
If your target audience will tend to have a multicore CPU and if your app startup and background initialization tasks won't contend for the same other resources (e.g. disk IO, network, ...) to the point of creating a new bottleneck, you might want to kick off a background thread to perform the initialization tasks (or even a thread per initialization task if you have several tasks that can run in parallel). That will make more efficient use of a multicore hardware architecture.
You didn't specify your target platform, but it's exceedingly easy to achieve this in .NET and so I have begun doing it in many of my desktop apps.
https://github.com/jmettraux/rufus-scheduler states that:
rufus-scheduler is a Ruby gem for scheduling pieces of code (jobs). It understands running a job AT a certain time, IN a certain time, EVERY x time or simply via a CRON statement.
rufus-scheduler is no replacement for cron/at since it runs inside of Ruby.
So what if it runs inside Ruby? Can't I access cron using the system command in Ruby?
rufus-scheduler is an "in-ruby-process" scheduler. It is not meant as a cron/at replacement at all.
rufus-scheduler was not meant for people not comfortable with cron/at on the command line, it was meant for people willing to schedule stuff directly inside their ruby process (and understanding what it implies).
If rufus-scheduler was meant as a replacement for cron/at, it would provide some kind of persistence for the jobs, but it does not.
Another take on that: http://adam.heroku.com/past/2010/6/30/replace_cron_with_clockwork/
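What "inside the Ruby process" implies can be shown with a toy scheduler. This is a deliberately minimal sketch, not rufus-scheduler's actual implementation: the jobs are just threads, so they vanish when the process does.

```ruby
# Toy in-process scheduler: each job is a sleeping thread.
# Like rufus-scheduler, everything dies with the process -- nothing
# persists across a restart, which is the key difference from cron.
class TinyScheduler
  def initialize
    @threads = []
  end

  def every(seconds, &job)
    @threads << Thread.new do
      loop do
        sleep seconds
        job.call
      end
    end
  end

  def shutdown
    @threads.each(&:kill)
  end
end

ticks = Queue.new
s = TinyScheduler.new
s.every(0.05) { ticks << Time.now }
sleep 0.25
s.shutdown
```

Kill the process and every pending job disappears with it; cron's jobs, by contrast, live in the crontab and survive reboots.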
I think rufus-scheduler is for those people who aren't comfortable using the system's crontab, at or batch.
cron does repeating/periodic jobs, while at and batch are for one-time jobs, since neither of those commands supports automatically repeating a command.
So rufus-scheduler is creating the functionality of the other commands, but if you're comfortable at the command-line and with the other commands, it doesn't buy you much in my opinion.
I haven't used it, but I did look through the source, and my concern is that rufus-scheduler relies on threads, which means Ruby will keep your app running in the background, waiting for the appropriate time or interval. If the process gets killed or the machine restarts, it looks like the job won't run, which is a major difference from the system commands: those persist across reboots and don't need your app to be in memory.
We use cron a lot at work for jobs; it's an industry-standard tool, and every Linux and Mac computer runs cron-scheduled jobs all through the day, though most users don't know it.
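For comparison, the cron side of the discussion above is a single crontab entry; the script path here is made up:

```shell
# m h dom mon dow  command          (edit with `crontab -e`)
0 22 * * 1-5 /usr/local/bin/nightly-report.sh
```

This runs at 22:00 on weekdays, with no Ruby process required to stay alive in between.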
My IRC bot sporadically dies, and I run it inside a screen session. The way to invoke it again is node protobot.js.
I always have to go find that screen session and reinvoke it. I'd like a faster way.
How can I invoke screen reliably from within a shell script in order to invoke node to revive it?
Your process needs care and feeding? Then don't put it in screen to begin with.
Investigate process monitoring, for instance with Monit, or God, or some other keep-it-alive software. There are tons of alternatives; once you've looked at those two, a quick search will turn up pros, cons, and the names of others.
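The core of any keep-it-alive monitor is a respawn loop; Monit, God, and friends add backoff, logging, and alerting on top. A bare-bones Ruby sketch, where the `restarts` cap exists only to keep the example finite:

```ruby
# Respawn the command whenever it exits. `restarts` caps the loop so
# this sketch terminates; a real monitor would run indefinitely.
def supervise(*cmd, restarts:)
  count = 0
  loop do
    pid = Process.spawn(*cmd)
    Process.wait(pid)
    count += 1
    break if count >= restarts
    sleep 0.5   # crude backoff so a crash loop doesn't spin the CPU
  end
  count
end
```

Supervising `node protobot.js` this way means there is no screen session to go hunting for: the bot is simply restarted whenever it dies.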
In my Cocoa application I need to run a task that uses unstable unfixable code. It takes little input, works independently from the rest of the app, and generates thousands of ObjC objects as a result.
How can I run the unstable part and let it crash without taking down whole application?
Is it possible to fork() Cocoa application? How UI, threads, GC, KVO, runloops are going to behave when forked?
Can I avoid creating standalone executable launched via NSTask?
If I launch separate process, how can I send and receive ObjC object instances? (I'd rather not serialize/unserialize them myself, and I need to keep them after child process ends).
How does OS X handle this problem for Spotlight and Quicklook plugins?
Is it possible to fork() Cocoa application?
Yes, but you pretty much have to exec immediately. Core Foundation will throw an exception if you try to use certain Cocoa methods or CF functions between fork and exec (or without exec'ing at all). You might get away with some things (I was able to ask a window for its frame, for example), but nothing is safe.
Launching an NSTask, of course, counts as fork and exec together, averting the problems of skipping or postponing the exec.
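A minimal sketch of the fork-then-exec-immediately pattern, shown in Ruby for brevity (the underlying fork/exec/wait calls are the same POSIX ones a Cocoa app would use):

```ruby
require 'rbconfig'

# Fork, then exec immediately: the child never touches parent state,
# which is the only pattern the answer above describes as safe.
pid = fork do
  # Anything here runs before exec and must not touch CF/Cocoa state.
  exec(RbConfig.ruby, "-e", "exit 42")   # replaces the child's image
end
_, status = Process.wait2(pid)
# status.exitstatus is now 42
```

The window between fork and exec is kept as close to empty as possible; NSTask collapses it to nothing for you.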
How UI, threads, GC, KVO, runloops are going to behave when forked?
UI: Windows (the actual ones on the screen) are not duplicated. Of course, you can't talk to your NSWindow and NSView objects anyway.
Threads: Not carried over to the subprocess. This is not as good as it may sound, as problem cases abound; for one, another thread might have held a lock in the parent, which remains locked in the child even though the thread that held it is absent.
GC: Well, the garbage collector runs on a thread…
KVO: Should be fine, since observation is normally triggered either explicitly or by KVO-supplied wrapper accessors.
Run loops: One per thread, so the main thread's run loop should still exist, but it will die if you return to it.
Can I avoid creating standalone executable launched via NSTask?
Nope.
If I launch separate process, how can I send and receive ObjC object instances?
If you don't exec, you don't.
Otherwise, you can use DO.
(I'd rather not serialize/unserialize them myself, and I need to keep them after child process ends).
Then you'll need to make a copy in the parent process. I don't know whether you can use copyWithZone: here; probably not. I suspect you will have to do some sort of plist- or archive-based serialization/unserialization.
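The shape of that serialize-and-copy-back step, sketched with Ruby's Marshal over a pipe as a stand-in for the plist/archive round-trip (DO would hide this plumbing for you):

```ruby
# Parent and child exchange structured data over a pipe via
# serialization. The parent keeps its own deserialized copy,
# which survives after the child process exits.
reader, writer = IO.pipe
pid = fork do
  reader.close
  writer.write(Marshal.dump({ files: %w[a b c], count: 3 }))
  writer.close
end
writer.close
data = Marshal.load(reader.read)
Process.wait(pid)
# `data` is now a fresh copy owned entirely by the parent
```

The key property is the same one needed in the Cocoa case: once deserialized, the parent's copy has no dependency on the (possibly crashed) child.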
How does OS X handle this problem for Spotlight and Quicklook plugins?
Spotlight has mdworker; Quick Look has something similar.
I use Distributed Objects to communicate between my cocoa program and a separate (unreliable) worker program. I start the worker as a NSTask. Distributed objects are very elegantly put together.
I'm working on a small free Cocoa app that involves some SFTP functionality, specifically working with uploads. The app is nearing completion however I've run into a pretty bad issue with regards to uploading folders that contain a lot of files.
I'm using ConnectionKit to handle the uploads:
CKTransferRecord *record;
record = [connection recursivelyUpload:@"/Users/me/large-folder"
                                    to:@"/remote/directory"];
This works fine for most files and folders. In this case, though, @"/Users/me/large-folder" has over 300 files in it. Calling this method spins my CPU up to 100% for about 30 seconds and my application is unresponsive (spinning beach ball). After the 30 seconds my upload is queued and works fine, but this is hardly ideal. Obviously whatever is enumerating these files is locking up my app until it's done.
Not really sure what to do about this. I'm open to just about any solution - even using a different framework, although I've done my research and ConnectionKit seems to be the best of what's out there.
Any ideas?
Use Shark. Start sampling, start the upload, and as soon as the hang ends, stop sampling.
If the output confirms that the problem is in ConnectionKit, you have two choices:
Switch to something else.
Contribute a patch that makes it not hang.
The beauty of open-source is that #2 is possible. It's what I recommend. Then, not only will you have a fast ConnectionKit, but once the maintainers accept your patch, everyone else who uses CK can have one too.
And if Shark reveals that the problem is not in ConnectionKit (rule #2 of profiling: you will be surprised), then you have Shark's guidance on how to fix your app.
Since the problem is almost certainly in the enumeration, you'll likely need to make it asynchronous. Most likely they're using NSFileManager's -enumeratorAtPath: for this. If that's the main problem, the best solution is probably to move that work onto its own thread. Given the very long time involved, I suspect they're actually reading the files during enumeration; the solution to that is to read each file lazily, right before uploading it.
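The move-the-enumeration-off-the-main-thread suggestion, sketched in Ruby with `Find` standing in for NSFileManager's enumerator (a real app would marshal the callback back onto the main thread):

```ruby
require 'find'

# Enumerate a large directory on a background thread and hand the
# finished file list to a callback, keeping the caller responsive.
def enumerate_async(dir, &on_done)
  Thread.new do
    files = []
    Find.find(dir) { |path| files << path if File.file?(path) }
    on_done.call(files.sort)
  end
end
```

The caller kicks this off and stays responsive; the upload begins only when the callback delivers the finished list, instead of blocking while 300+ files are walked.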
Peter is correct that Shark is helpful, but after being a long-time Shark fan, I've found that Instruments tends to give more usable answers more quickly. You can more easily add disk I/O and memory-allocation tracks alongside CPU Sampler in Instruments.
If you're blocking just one core at 100%, I recommend setting Active Thread to "Main Thread" and setting Sample Perspective to "All Sample Counts." If you're blocking all your cores at 100%, I recommend setting Active Thread to "All Threads" and Sample Perspective to "Running Sample Times."