Clarification on Threads and Run Loops in Cocoa

I'm trying to learn about threading and I'm thoroughly confused. I'm sure all the answers are in the Apple docs, but I just found them really hard to break down and digest. Maybe somebody could clear a thing or two up for me.
1) performSelectorOnMainThread
Does the above simply register an event in the main run loop, or is it somehow a new thread even though the method says "mainThread"? If the purpose of threads is to relieve processing on the main thread, how does this help?
2) RunLoops
Is it true that if I want to create a completely separate thread I use
"detachNewThreadSelector"? Does calling start on this initiate a default run loop for the thread that has been created? If so, where do run loops come into it?
3) And finally, I've seen examples using NSOperationQueue. Is it true to say that if you use performSelectorOnMainThread the threads are in a queue anyway, so NSOperation is not needed?
4) Should I forget about all of this and just use Grand Central Dispatch instead?

Run Loops
You can think of a run loop as an event-processing loop associated with a thread. One is provided by the system for every thread, but it's only run automatically for the main thread.
Note that running a run loop and executing a thread are two distinct concepts. You can execute a thread without running its run loop when you're just performing long calculations and don't have to respond to various events.
If you want to respond to various events from a secondary thread, you retrieve the run loop associated with the thread via
[NSRunLoop currentRunLoop]
and run it. The events a run loop can handle are called input sources, and you can add input sources to a run loop.
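For example, here is a minimal sketch of a secondary thread's entry point that keeps its run loop alive; the method name is illustrative, not from the question:
- (void)backgroundThreadMain:(id)unused   // hypothetical NSThread entry point
{
    @autoreleasepool {
        NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
        // Attach at least one input source; -run returns immediately if there are none.
        [runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        [runLoop run];   // processes timers and input sources until the run loop is stopped
    }
}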
PerformSelector
performSelectorOnMainThread: adds the target and the selector to a special input source, the perform-selector source, of the main thread's run loop. The main run loop dequeues these requests and handles the method calls one by one, as part of its event-processing loop.
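As a hedged sketch of the typical use, hopping back to the main thread from a background thread (-updateLabel: is a hypothetical method on self):
[self performSelectorOnMainThread:@selector(updateLabel:)   // hypothetical UI-updating method
                       withObject:@"Finished"
                    waitUntilDone:NO];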
NSOperation/NSOperationQueue
I think of NSOperation as a way to explicitly declare the various tasks inside an app that take some time but can run mostly independently. It's easier than detaching a new thread yourself and maintaining everything yourself, too. An NSOperationQueue automatically maintains a pool of background threads that it reuses, and runs its NSOperations in parallel.
So yes, if you just need to queue up work on the main thread, you can do away with NSOperationQueue and just use performSelectorOnMainThread:, but that's not the main point of NSOperation.
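To illustrate the difference, here is a minimal sketch of the queue-based approach; the helper names are made up for the example:
NSOperationQueue *workQueue = [[NSOperationQueue alloc] init];
[workQueue addOperationWithBlock:^{
    NSData *result = ExpensiveComputation();          // hypothetical long-running work
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self showResult:result];                     // hypothetical UI update, back on the main thread
    }];
}];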
GCD
GCD is a new infrastructure introduced in Snow Leopard. NSOperationQueue is now implemented on top of it.
It works at the level of functions / blocks. Feeding blocks to dispatch_async is extremely handy, but for a larger chunk of operations I prefer to use NSOperation, especially when that chunk is used from various places in an app.
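As a rough sketch, the same background-work-then-main-thread pattern with plain GCD looks like this (helper names are again illustrative):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *result = ExpensiveComputation();      // hypothetical long-running work
    dispatch_async(dispatch_get_main_queue(), ^{
        [self showResult:result];                 // hypothetical UI update
    });
});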
Summary
You need to read the official Apple documentation (the Threading Programming Guide and the Concurrency Programming Guide)! There are many informative blog posts on this topic, too.

1) performSelectorOnMainThread
Does the above simply register an event in the main run loop …
You're asking about implementation details. Don't worry about how it works.
What it does is perform that selector on the main thread.
… or is it somehow a new thread even though the method says "mainThread"?
No.
If the purpose of threads is to relieve processing on the main thread, how does this help?
It helps you when you need to do something on the main thread. A common example is updating your UI, which you should always do on the main thread.
There are other methods for doing things on new secondary threads, although NSOperationQueue and GCD are generally easier ways to do it.
2) RunLoops
Is it true that if I want to create a completely separate thread I use "detachNewThreadSelector"?
That has nothing to do with run loops.
Yes, that is one way to start a new thread.
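For example, a minimal hedged sketch, where -doWork: is a hypothetical method on self that the new thread will execute:
[NSThread detachNewThreadSelector:@selector(doWork:)   // hypothetical method that becomes the new thread's body
                         toTarget:self
                       withObject:nil];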
Does calling start on this initiate a default run loop for the thread that has been created?
No.
I don't know what you're “calling start on” here, anyway. detachNewThreadSelector: doesn't return anything, and it starts the thread immediately. I think you mixed this up with NSOperations (which you also don't start yourself—that's the queue's job).
If so where do run loops come into it?
Run loops just exist, one per thread. On the implementation side, they're probably created lazily, on first use.
3) And finally, I've seen examples using NSOperationQueue. Is it true to say that if you use performSelectorOnMainThread the threads are in a queue anyway, so NSOperation is not needed?
These two things are unrelated.
performSelectorOnMainThread: does exactly that: Performs the selector on the main thread.
NSOperations run on secondary threads, one per operation.
An operation queue determines the order in which the operations (and their threads) are started.
Threads themselves are not queued (except maybe by the scheduler, but that's part of the kernel, not your application). The operations are queued, and they are started in that order. Once started, their threads run in parallel.
4) Should I forget about all of this and just use the Grand Central Dispatch instead?
GCD is more or less the same set of concepts as operation queues. You won't understand one as long as you don't understand the other.
So what are all these things good for?
Run loops
Within a thread, a way to schedule things to happen. Some may be scheduled at a specific date (timers), others simply “whenever you get around to it” (sources). Most of these are zero-cost when idle, only consuming CPU time when the thing happens (a timer fires or a source is signaled), which makes run loops a very efficient way to have several things going on at once without any extra threads.
You generally don't handle a run loop yourself when you create a scheduled timer; the timer adds itself to the run loop for you.
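For instance, a small sketch on the main thread; the scheduled timer attaches itself to the current run loop, and -tick: is a hypothetical method on self:
[NSTimer scheduledTimerWithTimeInterval:1.0
                                 target:self
                               selector:@selector(tick:)     // hypothetical periodic handler
                               userInfo:nil
                                repeats:YES];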
Threads
Threads enable multiple things to happen at the exact same time on different processors. Thing 1 can happen on thread A (on processor 1) while thing 2 happens on thread B (on processor 0).
This can be a problem. Multithreaded programming is a dance, and when two threads try to step in the same place, pain ensues. This is called contention, and most discussion of threaded programming is on the topic of how to avoid it.
NSOperationQueue and GCD
You have a thing you need done. That's an operation. You can't have it done on the main thread, or you'd simply send a message like normal; you need to run it in the background, on a secondary thread.
To achieve this, express it as either an NSOperation object (you create a subclass of NSOperation and instantiate it) or a block (or both), then add it to either an NSOperationQueue (NSOperations, including NSBlockOperation) or a dispatch queue (bare block).
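A hedged sketch of the subclass route (the class name is illustrative):
@interface FetchOperation : NSOperation   // hypothetical subclass
@end

@implementation FetchOperation
- (void)main
{
    @autoreleasepool {
        // The long-running work goes here; the queue runs it on a secondary thread.
    }
}
@end

// Elsewhere, assuming `queue` is an NSOperationQueue you keep around:
[queue addOperation:[[FetchOperation alloc] init]];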
GCD can be used to make things happen on the main thread, as well; you can create serial queues and add blocks to them. A serial queue, as its name suggests, will run exactly one block at a time, rather than running a bunch of them in parallel.
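As a sketch of a private serial queue (the label string is just illustrative):
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
dispatch_async(workQueue, ^{
    // This block runs to completion...
});
dispatch_async(workQueue, ^{
    // ...before this one is started.
});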
So what should I do?
I would not recommend creating threads directly. Use NSOperationQueue or GCD instead; they force you into better thinking habits that will reduce the risk of your threaded code inducing headaches.
For things that run periodically, not fitting into the “thing I need done” model of NSOperations and GCD blocks, consider just using the run loop on the main thread. Chances are, you don't need to put it on a thread after all. A rendering loop in a 3D game, for example, can be a simple timer.

Related

Invoking Mono.block() on "nioEventLoopGroup-*" threads ends up hanging all the threads

The project I am working on uses Spring WebFlux, and I came across a very odd issue.
Some pieces of code are written purely in Reactor style (a couple of Flux/Mono pipelines); however, in an inner publisher I have to call a method that uses Mono.block() internally.
The weird thing is that the whole service becomes totally stuck, and when I captured a thread dump, I saw that all the "nioEventLoopGroup-*" threads were hung.
A fun fact is that if I use a "simple" thread (new Thread(..)) to call the method (with .block inside), everything works fine.
So my question is: are those "nioEventLoopGroup-*" threads not allowed to call any blocking code?
Sorry for asking a dumb question, but it's a blocking issue for now, so I am looking forward to your insight.
Reactor, by default, uses a fixed-size thread pool. When you use block(), the actual work needs to be done in some thread or another, which depends on the nature of the subscription and the Mono/Flux. Most likely a set of new tasks will be scheduled on the same scheduler, but block() will suspend its thread, waiting for those tasks to complete, so there is one fewer thread for those other tasks to be scheduled on. Evidently you have enough of these calls to exhaust the entire thread pool. All your block() calls are waiting for other tasks to complete, but there are no threads available for them to run on.
There's no reason to call block() inside a mapping in a reactive stream. There are always other ways of achieving the same goal without blocking: flatMap(), zip(), and so on.

How is wait_for_completion different from wake_up_interruptible

How is wait_for_completion different from wake_up_interruptible?
Actually the question is: how are completions different from wait queues?
They look like the same concept to me.
The completion structure internally uses wait queues and locks.
It was introduced to address a very commonly occurring scenario, where multiple threads are waiting on some event. Once that event happens, you want only one of the waiting threads to start running.
The key here is that kernel developers don't have to implement and maintain the wait queue themselves, which makes a kernel developer's life easier.
Adding to Harman's answer, I would also say that those two functions are called in different contexts: wake_up_interruptible() will wake up all threads waiting on a wait queue, whereas wait_for_completion() will wait until a specific task completes. Those are two different things to me.

MFC CEvent class member function SetEvent , difference with Thread Lock() function?

What is the difference between SetEvent() and the thread Lock() function? Anyone, please help me.
Events are used when you want to start/continue processing once a certain task is completed, i.e. you want to wait until that event occurs. Other threads can inform the waiting thread about the completion of this task using SetEvent.
On the other hand, a critical section is used when you want only one thread to execute a block of code at a time, i.e. you want a set of instructions to be executed by one thread without any other thread changing the state at that time. For example, you are inserting an item into a linked list, which involves multiple steps; at that time you don't want another thread to come and try to insert one more object into the list. So you use a critical section to block the other thread until the first one finishes.
Events can be used for inter-process communication, i.e. synchronizing activity among different processes. They are typically used for 'signalling' the occurrence of an activity (e.g. a file write has finished). More information on events:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686915%28v=vs.85%29.aspx
Critical sections can only be used within a process for synchronizing threads and use a basic lock/unlock concept. They are typically used to protect a resource from multi-threaded access (e.g. a variable). They are very cheap (in CPU terms) to use. The inter-process variant is called a Mutex in Windows. More info:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682530%28v=vs.85%29.aspx

Forcing context switch in Windows

Is there a way to force a context switch in C++ to a specific thread, assuming I have the thread handle or thread ID?
No, you won't be able to force the operating system to run the thread you want. You can use yield to force a context switch, though...
Yield in the Win32 API is the SwitchToThread function. If there is no other thread available to run, a zero value is returned and the current thread keeps running anyway.
You can only encourage the Windows thread scheduler to pick a certain thread; you can't force it. You do so first by making the thread block on a synchronization object and signaling it, and second by bumping up its priority.
Explicit context switching is supported; you'll have to use fibers. Review SwitchToFiber(). A fiber is not a thread by a long shot; it is similar to a co-routine of old. Fibers' heyday has come and gone; they are not competitive with threads anymore. They have very poor CPU cache locality and cannot take advantage of multiple cores.
The only way to force a particular thread to run is by using process/thread affinity, but I can't imagine ever having a problem for which this was a reasonable solution.
The only way to force a context switch is to force a thread onto a different processor using affinity.
In other words, what you are trying to do isn't really viable.
Calling SwitchToThread() will result in a context switch if there is another thread ready to run that is eligible to run on the current processor. The documentation states it as follows:
If calling the SwitchToThread function causes the operating system to switch execution to another thread, the return value is nonzero.
If there are no other threads ready to execute, the operating system does not switch execution to another thread, and the return value is zero.
You can temporarily bump the priority of the other thread, while looping with Sleep(0) calls: this passes control to other threads. Suppose that the other thread has increased a lock variable and you need to wait until it becomes zero again:
// Wait until the other thread releases the lock.
// Note: THREAD_PRIORITY_ABOVE_NORMAL is the actual Win32 constant, and
// InterlockedCompareExchange(&lock, 0, 0) stands in here as an atomic read of the flag.
SetThreadPriority(otherThread, THREAD_PRIORITY_ABOVE_NORMAL);
while (InterlockedCompareExchange(&lock, 0, 0) != 0)
    Sleep(0);
SetThreadPriority(otherThread, THREAD_PRIORITY_NORMAL);
I would check out the book Concurrent Programming for Windows. The scheduler seems to do a few things worth noting.
Sleep(0) only yields to higher priority threads (or possibly others at the same priority). This means you cannot fix priority inversion situations with just a Sleep(0), where other lower priority threads need to run. You must use SwitchToThread, Sleep a non-zero duration, or fully block on some kernel HANDLE.
You can create two synchronization objects (such as two events) and use the SignalObjectAndWait API.
If hObjectToWaitOn is non-signaled and your other thread is waiting on hObjectToSignal, the OS can theoretically perform a quick context switch inside this API, before the end of the time slice.
And if you want the current thread to automatically resume, simply pass a small value (such as 50 or 100) for dwMilliseconds.

Are aysnchronous NSURLConnections multi-threaded

I've noticed that if I create an NSURLConnection and fire the request, all is well. My delegate methods get called and the last delegate method gets called well after the code block invoking the connection completes. Great.
That leads me to believe the connections are asynchronous, which implies that they're multi-threaded. Is that correct? Could they be asynchronous but in the same thread? No, that's crazy, right?
But, in every example I've seen using an NSOperation, NSURLConnections are always scheduledInRunLoop after which [runLoop runMode ...] is invoked in a while loop.
Can someone explain exactly what is happening here? It seems to me that the first case requires spawning secondary threads but no manual invocation of the run loop (on those threads) while NSOperation (in a new thread) does require manual invocation of the run loop.
Why is no manual invocation required for the first case?
NSURLConnection does spawn a single background thread to manage all instances of itself, but this is generally irrelevant to the caller, since the delegate calls are made on whatever thread owns the runloop the connection was scheduled in. (This fact turned out to be very relevant to me recently, but these things really only come up when dealing with insane crashers in multi-threaded apps.)
For more caller-relevant details, you should look at the docs for -[NSURLConnection scheduleInRunLoop:forMode:]. It explains how to manually handle scheduling and unscheduling, and how NSURLConnections behave in a multi-threaded environment.
If you are unclear on how run loops work and how they perform asynchronous actions without requiring additional threads, you should read Run Loops in the Threading Programming Guide. This is a very important topic for moving to more advanced Cocoa development.
Because the main thread already has a run loop, I'd imagine.
If you want to run NSURLConnection on another thread, you should run that thread's run loop, like this, in your thread's main method:
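// `finished` is assumed to be a BOOL that your delegate sets in
// -connectionDidFinishLoading: or -connection:didFailWithError: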
while (!finished)
{
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1]];
}
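For context, a hedged sketch of how such a connection might be created and scheduled on that secondary thread before entering the loop above; `request` is assumed to exist, and self is assumed to be the delegate:
NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                              delegate:self
                                                      startImmediately:NO];
[connection scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[connection start];   // delegate callbacks will arrive on this thread's run loop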
