How to prevent a Cocoa application from freezing?

- (void)test
{
    int i;
    for (i = 0; i < 1000000; i++)
    {
        // do lengthy operation
    }
}
How do I prevent its GUI from freezing?

Bottom line: don't block the main thread and, thus, don't block the main event loop.
Now, you could spawn a thread. But that isn't actually the correct way to write concurrent programs on Mac OS X.
Instead, use NSOperation and NSOperationQueue. They are specifically designed to support your concurrent programming needs, they scale well, and NSOperationQueue is tightly integrated into the system so that it controls concurrency based on the system resources available (number of cores, CPU load from other applications, etc.) more efficiently than any direct use of threads.
See also the Threaded Programming Guide.
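A minimal sketch of that approach (-test is the method from the question; the queue ivar and the other method names are hypothetical):
- (void)startLengthyOperation
{
    // operationQueue is assumed to be an NSOperationQueue ivar created elsewhere,
    // e.g. operationQueue = [[NSOperationQueue alloc] init];
    NSInvocationOperation *op =
        [[NSInvocationOperation alloc] initWithTarget:self
                                             selector:@selector(test)
                                               object:nil];
    [operationQueue addOperation:op];
    [op release];   // the queue retains the operation; omit this line under ARC
}

- (void)test
{
    int i;
    for (i = 0; i < 1000000; i++) {
        // do lengthy operation
    }
    // Report back on the main thread; never touch the UI from the operation itself.
    [self performSelectorOnMainThread:@selector(lengthyOperationFinished)
                           withObject:nil
                        waitUntilDone:NO];
}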

I would do the lengthy operation in a separate thread, using NSThread.
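For completeness, that approach is a one-liner (-test being the method from the question); just remember that the detached thread needs its own autorelease pool if it creates Cocoa objects, and it must not touch the UI directly:
// Spawn a background thread that runs -test; the main event loop keeps running.
[NSThread detachNewThreadSelector:@selector(test) toTarget:self withObject:nil];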

aio on OS X: Is it implemented in the kernel or with user threads? Other options?

I am working on my small C++ framework and have a file class which should also support asynchronous reading and writing. The only solution I have found, other than doing synchronous file I/O inside worker threads, is aio. Anyway, I was looking around and read somewhere that on Linux, aio is not even implemented in the kernel but rather with user threads. Is the same true for OS X? Another concern is aio's callback mechanism, which has to spawn an extra thread for each callback, since you can't assign a particular thread or thread pool to handle them (signals are not an option for me). So here are the questions resulting from that:
Is aio implemented in the kernel on OS X, and is it therefore most likely better than my own threaded implementation?
Can the callback system (spawning a thread for each callback) become a bottleneck in practice?
If aio is not worth using on OS X, are there any other alternatives on Unix? In Cocoa? In Carbon?
Or should I simply emulate async I/O with my own thread pool?
What is your experience on the subject?
You can see exactly how AIO is implemented on OS X right here.
The implementation uses kernel threads that pop jobs off a single queue and execute them in a blocking fashion, ordered by each request's priority (at least, that's what it looks like at first glance).
You can configure the number of threads and the size of the queue with sysctl. To see these options and the default values, run sysctl -a | grep aio
kern.aiomax = 90
kern.aioprocmax = 16
kern.aiothreads = 4
In my experience, in order for it to make any sense to use AIO, these limits need to be a lot higher.
As for the callbacks in threads, I don't believe Mac OS X supports that. It only does completion notifications through signals (see source).
You could probably do as good a job with your own thread pool. One thing you could do better than the current Darwin implementation is to sort your read jobs by physical location on the disk (see fcntl and F_LOG2PHYS), which might even give you an edge.
@Moka: Sorry to say that you're wrong about the Linux implementation; as of kernel 2.6 there is a kernel implementation of AIO, which comes with libaio (libaio.h).
The implementation that doesn't use kernel threads but instead uses user threads is POSIX.1 AIO, and it does it that way to make it more portable, as not all Unix-based OSes support completion events at the kernel level.
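To make the discussion concrete, here is a minimal sketch of issuing a single request through the aio API discussed above. It is plain POSIX C (callable as-is from a Cocoa/Objective-C program), it simply polls for completion rather than using signal delivery, and the file path is hypothetical.
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/example.dat", O_RDONLY);   /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0) { perror("aio_read"); close(fd); return 1; }

    /* Poll for completion; the alternative on OS X is SIGEV_SIGNAL,
       not a per-request callback thread (see the discussion above). */
    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);

    ssize_t n = aio_return(&cb);
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}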

How to understand asynchronous I/O in Windows?

1. How should I understand asynchronous I/O in Windows?
2. If I write/read something to a file using asynchronous I/O:
WriteFile();
ReadFile();
WriteFile();
how many threads does the OS generate to accomplish these tasks? Do the three tasks run simultaneously, in a multi-threaded way, or do they run one after another, just possibly in a different order?
3. Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
1. How should I understand asynchronous I/O in Windows?
Read the Win32 documentation. Search the web. Don't expect an answer to such a large, broad question here on SO.
2. If I write/read something to a file using asynchronous I/O:
WriteFile();
ReadFile();
WriteFile();
how many threads does the OS generate to accomplish these tasks?
I don't think it generates any. It will reuse existing thread contexts to execute kernel function calls. Basically the OS schedules the work and borrows a thread to do it, which is fine, since the kernel context is always the same.
3. Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
I believe so, yes. I don't know whether the order of execution is guaranteed to match the order of submission; if it isn't, you will get unpredictable results if you issue concurrent reads/writes on the same byte ranges.
To your questions:
How many threads does the OS generate to accomplish these tasks?
That depends on whether you are using the Windows thread pools, IOCP, etc. Generally, you decide.
Do the three tasks run simultaneously, in a multi-threaded way, or do they run one after another, just in a different order?
This depends on your architecture. On a single-core machine, the three tasks would run one after another, and the order would be decided by the OS. On a multi-core machine they might run together, depending on how the OS schedules the threads.
Can I use multithreading, with each thread using asynchronous I/O to read or write the same file?
That is outside my knowledge, so someone else would need to answer that one.
I suggest getting a copy of Windows via C/C++, as it has a very large chapter on asynchronous I/O.
I guess it depends on which operating system you are using. But you shouldn't have to worry about this anyhow; it is transparent and should not affect how you write your code.
If you use the standard read and write in Windows, you don't have to care that the system may not write it immediately, unless you are writing on the command line and waiting for the user to type some input. The OS is responsible for ensuring that what you write will eventually be written to the hard drive, and will do a much better job than you can do anyway.
If you are working with some unusual form of asynchronous I/O, then please rephrase your question.
I suggest looking for Jeffrey Richter's books on Win32 programming. They are very well-written guides for just this sort of thing.
I think he has a newer book(s?) on C#, so watch out that you don't buy the wrong one.

NSThread or Python's threading module in PyObjC?

I need to make some network-bound calls (e.g., fetch a website) and I don't want them to block the UI. Should I be using NSThread or Python's threading module if I am working in PyObjC? I can't find any information on how to choose one over the other. Note: I don't really care about Python's GIL, since my tasks are not CPU-bound at all.
It will make no difference; you will get the same behavior with slightly different interfaces. Use whichever fits best into your system.
Learn to love the run loop. Use Cocoa's URL-loading system (or, if you need plain sockets, NSFileHandle) and let it call you when the response (or failure) comes back. Then you don't have to deal with threads at all (the URL-loading system will use a thread for you).
Pretty much the only time to create your own threads in Cocoa is when you have a large task (>0.1 sec) that you can't break up.
(Someone might say NSOperation, but NSOperationQueue is broken and RAOperationQueue doesn't support concurrent operations. Fine if you already have a bunch of NSOperationQueue code or really want to prepare for working NSOperationQueue, but if you need concurrency now, run loop or threads.)
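As a rough sketch of what "use the URL-loading system and let it call you" looks like (shown in Objective-C; the same NSURLConnection classes are reachable from PyObjC, and the ivar name is hypothetical):
- (void)startFetch
{
    NSURLRequest *request =
        [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/"]];
    receivedData = [[NSMutableData alloc] init];   // hypothetical NSMutableData ivar
    // The connection schedules itself on the current run loop; no thread management needed.
    [NSURLConnection connectionWithRequest:request delegate:self];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [receivedData appendData:data];   // called as data trickles in
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    // Safe to update the UI here: the delegate is called on the thread
    // that started the connection (the main thread in this sketch).
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
{
    NSLog(@"fetch failed: %@", error);
}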
I'm more fond of the native Python threading solution, since I can join threads and keep references to them. AFAIK, NSThread doesn't support joining and cancelling, and you can get a variety of things done with Python threads.
Also, it's a bummer that an NSThread entry point can't take multiple arguments, and though there are workarounds for this (like packing them into an NSDictionary or NSArray), it's still not as elegant or as simple as invoking a thread with its arguments laid out as ordinary parameters.
But yeah, if the situation demands that you use NSThread, there shouldn't be any problem at all. Otherwise, it's cool to stick with native Python threads.
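A small sketch of the NSDictionary workaround mentioned above (shown in Objective-C; it translates directly to PyObjC, and the selector and keys are hypothetical):
- (void)startFetchWithURL:(NSURL *)url
{
    // Pack multiple "arguments" into one dictionary, since the entry point takes a single object.
    NSDictionary *args = [NSDictionary dictionaryWithObjectsAndKeys:
                              url, @"url",
                              [NSNumber numberWithInt:30], @"timeout",
                              nil];
    [NSThread detachNewThreadSelector:@selector(fetchWithArguments:)
                             toTarget:self
                           withObject:args];
}

- (void)fetchWithArguments:(NSDictionary *)args
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSURL *url  = [args objectForKey:@"url"];
    int timeout = [[args objectForKey:@"timeout"] intValue];
    // ... do the network-bound work with url and timeout ...
    [pool drain];
}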
I have a different suggestion, mainly because Python threading is just plain awful because of the GIL (Global Interpreter Lock), especially when you have more than one CPU core. There is a video presentation that goes into this in excruciating detail, but I cannot find the video right now - it was done by a Google employee.
Anyway, you may want to think about using the subprocess module instead of threading (have a helper program that you can execute, or use another binary on the system). Or use NSThread; it should give you more performance than you can get with CPython threads.

OS X inter-thread communication question

I am developing a multi-threaded application in Cocoa. The main thread takes values from the user, and when a button is clicked I invoke a secondary thread in which a long calculation takes place. Now from this thread I have to return the output of every step of the calculation to the main thread. I want to periodically send data from one thread to the other. I can't find any simple example that does this. Any ideas?
There are a number of ways to do this, in rough order of complexity (easiest first):
Use NSObject's performSelectorOnMainThread:withObject:waitUntilDone:, which is pretty self-explanatory.
Use performSelector:onThread:withObject:waitUntilDone:, which will let you go the other way.
Use an NSNotification (and NSDistributedNotificationCenter), though you can easily run into a race condition if you're not careful.
Use NSPort to send data back and forth.
Check out the doc that Abizer mentioned for details on all of these.
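A minimal sketch of the first (and simplest) option, with hypothetical method and outlet names: the calculation thread hands each intermediate value to the main thread, which is the only place the UI should be touched.
// Runs on the secondary thread.
- (void)runCalculation
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    for (int step = 0; step < 100; step++) {
        NSNumber *value = [NSNumber numberWithDouble:[self computeStep:step]]; // computeStep: is hypothetical
        [self performSelectorOnMainThread:@selector(updateWithValue:)
                               withObject:value
                            waitUntilDone:NO];
    }
    [pool drain];
}

// Runs on the main thread, so it is safe to update the UI here.
- (void)updateWithValue:(NSNumber *)value
{
    [resultField setDoubleValue:[value doubleValue]];  // resultField is a hypothetical NSTextField outlet
}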
performSelectorOnMainThread:withObject:waitUntilDone: is often the easiest way to update the UI with a background thread's progress. You could also create your own storage area that's safe to access between threads using NSLock or a similar mechanism, or even use distributed objects (which also work between processes or over a network).
Then there are NSOperationQueue and NSOperation, which do help a lot to simplify multi-threaded programming, although a lot of programmers have been avoiding them since they can cause a crash in certain circumstances under Leopard.
Have a look at the Apple docs for this.
You may need to create an ADC member account, but this is free.
Multi-threaded Cocoa Programs

Can I be sure that the code I write is always executed in the same thread?

I normally work on single-threaded applications and have generally never really bothered with dealing with threads. My understanding of how things work - which may well be wrong - is that as long as we're always dealing with single-threaded code (i.e. no forks or anything like that), it will always be executed in the same thread.
Is this assumption correct? I have a fuzzy idea that UI libraries/frameworks may spawn threads of their own to handle GUI stuff (which accounts for the fact that the Windows Task Manager tells me that my 'single-threaded' application is actually running 10 threads), but I'm guessing that this shouldn't affect me?
How does this apply to COM? For instance, if I were to create an instance of a COM component in my code, and that COM component writes some information to a thread-based location (using System.Threading.Thread.SetData, for instance), will my application be able to get hold of that information?
So in summary:
In single-threaded code, can I be sure that whatever I store in a thread-based location will be retrievable from anywhere else in the code?
If that single-threaded code were to create an instance of a COM component which stores some information in a thread-based location, is that similarly retrievable from anywhere else?
UI usually has the opposite constraint (sadly): it's single-threaded, and everything must happen on that thread.
The easiest way to check whether you are always on the same thread (for, say, a function) is to have an integer field initialized to -1 and a check function like this (say you are in C#):
private int m_ThreadId = -1;

void AssertSingleThread()
{
    if (m_ThreadId < 0) m_ThreadId = Thread.CurrentThread.ManagedThreadId;
    Debug.Assert(m_ThreadId == Thread.CurrentThread.ManagedThreadId);
}
That said:
I don't really understand question #1. Why store something in a thread-based location if your purpose is to have global scope?
About the second question: most COM code runs on a single thread and, most often, on the thread where your UI message processing lives. This is because most COM code is designed to be compatible with VB6, which is single-threaded.
The reason your program has about 10 threads is that both Windows (if you use some of its features, like completion ports or certain kinds of timers) and the CLR (for example, for the GC or, again, some types of timers) may create threads in your process space (technically, any program with enough privileges can, too).
Think about the model of having a single dataStore class, owned by your main thread, whose instance variables all the other threads read and write. This will avoid a lot of the problems that arise from accessing data all over the shop.
It's a simple idea, until you reach the fun part of threading: concurrency and synchronization. Put simply, if you have two threads that want to read and write the same variable inside dataStore at the same time, you have a problem.
Java handles this by allowing you to declare a variable or method synchronized, allowing only one thread access at a time.
I believe some .NET objects have Lock and Synchronized methods defined on them, but I know no more than this.
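For what it's worth, the same shared-store idea in Cocoa/Objective-C (the language most of this page deals with) looks roughly like the following minimal sketch; the class and variable names are hypothetical, and NSLock plays the role of Java's synchronized or a .NET lock.
@interface DataStore : NSObject {
    NSLock *lock;
    double latestValue;
}
- (void)setLatestValue:(double)value;
- (double)latestValue;
@end

@implementation DataStore
- (id)init
{
    if ((self = [super init]))
        lock = [[NSLock alloc] init];
    return self;
}

- (void)setLatestValue:(double)value
{
    [lock lock];            // only one thread may write at a time
    latestValue = value;
    [lock unlock];
}

- (double)latestValue
{
    [lock lock];            // readers take the same lock, so they never see a torn update
    double value = latestValue;
    [lock unlock];
    return value;
}

- (void)dealloc
{
    [lock release];
    [super dealloc];
}
@end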
