Transforming code to Grand Central Dispatch - cocoa

I have an array of NSNumbers that has to pass through 20 tests. If one test fails the array is invalid; if all tests pass the array is valid. I want it to stop as soon as the first failure happens: if a failure happens on the 3rd test, stop evaluating the remaining tests.
I am trying to convert my current serial code to parallel processing with Grand Central Dispatch, but I cannot wrap my head around it.
This is what I have.
First the definition of the tests to be done. This array is used to run the tests.
Every individual test returns YES when it fails and NO when it is ok.
#define TESTS @[ \
    @"averageNotOK:", \
    @"numbersOverRange:", \
    @"numbersUnderRange:", \
    @"numbersForbidden:", \
    // ... etc etc
    @"numbersNotOnCurve:"]
- (BOOL)numbersPassedAllTests:(NSArray *)numbers {
    NSInteger count = [TESTS count];
    for (int i = 0; i < count; i++) {
        NSString *aMethodName = TESTS[i];
        SEL selector = NSSelectorFromString(aMethodName);
        BOOL failed = NO;
        NSMethodSignature *signature = [[self class] instanceMethodSignatureForSelector:selector];
        NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:signature];
        [invocation setSelector:selector];
        [invocation setTarget:self];
        [invocation setArgument:&numbers atIndex:2];
        [invocation invoke];
        [invocation getReturnValue:&failed];
        if (failed) {
            return NO;
        }
    }
    return YES;
}
This works perfectly, but it performs the tests sequentially.
How can I perform these tests in parallel while executing as few tests as necessary?

I assume you've spotted dispatch_apply, which is the trivial parallel for loop, and realised it can't do an early exit. Hence the question.
I'm afraid the answer is that you'll need to do some bookkeeping yourself, but luckily it shouldn't be too hard. To avoid repeating what you've got, pretend I've turned the stuff inside your loop into:
BOOL failedTest(int);

So your serial loop looks like:

for (int i = 0; i < count; i++) {
    if (failedTest(i))
        return NO;
}
return YES;
Then you might do:
#import <libkern/OSAtomic.h>

volatile __block int32_t hasFailed = 0;

dispatch_apply(
    count,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
    ^(size_t i)
    {
        // do no computation if somebody else already failed
        if (hasFailed) return;

        if (failedTest(i))
            OSAtomicIncrement32(&hasFailed);
    });

return !hasFailed;
So it'll keep starting tests until one of them has failed. The OSAtomicIncrement32 just ensures atomicity without requiring a mutex; it'll usually turn into a cheap single instruction. You could get away with just using a BOOL, as atomicity isn't really going to be a problem here, but why not do it properly?
EDIT: also, you could just use @selector directly and create an array of selectors rather than using NSSelectorFromString with an array of strings, to save lookup time. If your tests are really cheap, then consider doing them part serial, part parallel by having the dispatch_apply do, say, count/10 dispatches and having each dispatch do 10 tests. Otherwise GCD will just issue count instances of the block, and issuing has an associated cost.
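If you do go the chunked route, a minimal sketch might look like the following. The stride of 10 and the failedTest() helper are assumptions carried over from above, not code from the original question:

size_t stride = 10;                                // assumed batch size
size_t chunks = (count + stride - 1) / stride;     // round up so no test is skipped
volatile __block int32_t hasFailed = 0;

dispatch_apply(
    chunks,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
    ^(size_t chunk)
    {
        size_t end = (chunk + 1) * stride;
        if (end > count) end = count;              // the last chunk may be short
        for (size_t i = chunk * stride; i < end; i++) {
            if (hasFailed) return;                 // somebody else already failed
            if (failedTest((int)i)) {
                OSAtomicIncrement32(&hasFailed);   // record the failure atomically
                return;
            }
        }
    });

return !hasFailed;

Each dispatched block now amortises the issuing cost over ten tests while still bailing out as soon as any chunk records a failure.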

Related

cocoa: why is for-each faster than a for loop?

My application reads all the people in the address book in two ways:
for-loop:

CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
long count = macContact.addressBook.people.count;
for (int i = 0; i < count; ++i) {
    ABPerson *person = [macContact.addressBook.people objectAtIndex:i];
    NSLog(@"%@", person);
}
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - startTime);
for-each:

CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
for (ABPerson *person in macContact.addressBook.people) {
    NSLog(@"%@", person);
}
NSLog(@"%f", CFAbsoluteTimeGetCurrent() - startTime);
The for-each loop took only 4 seconds to enumerate 5000 people in the address book, while the for loop took 10 minutes to do the same job.
I want to know why there is such a huge difference in performance.
The difference in performance almost certainly has to do with the macContact.addressBook.people part. You're calling that every single time through the for loop but only once with the for-each loop. I'm guessing either the addressBook or the people properties are not returning cached data but rather new data every time.
Try using

NSArray *people = macContact.addressBook.people;
for (int i = 0; i < [people count]; i++) {
    NSLog(@"%@", [people objectAtIndex:i]);
}
You'll probably find the performance is very similar again.
That said, for-each is faster than for in general. The reason is that a for loop sends a message on every single pass through the loop (-objectAtIndex:), whereas for-each can fetch the objects much more efficiently by grabbing them in large batches.
In more recent versions of the OS you can go a step further and use a block-based enumeration method. This looks like
[macContact.addressBook.people enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    NSLog(@"%@", obj);
}];
For NSArrays this should have very similar performance to a for-each loop. For other data structures such as dictionaries this style can be faster because it can fetch the value along with the key (whereas a for-each only gives you the key and requires you to use a message send to get the value).
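As a small illustration of that last point, the dictionary variant hands the block both the key and its value in one pass; the dictionary contents here are made up for the example:

NSDictionary *ages = @{ @"Alice" : @30, @"Bob" : @25 };

// The key and its value arrive together, so no extra -objectForKey: send is needed per entry.
[ages enumerateKeysAndObjectsUsingBlock:^(id key, id value, BOOL *stop) {
    NSLog(@"%@ is %@ years old", key, value);
}];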

Mutating array while reading, not enumerating

If I have two different threads via GCD accessing an NSMutableArray and one is merely creating a new array based off the mutable array while the other thread is deleting records from the array, should I expect this to be a problem? That is, shouldn't the copy, which I presume is merely "reading" the array, just get whatever happens to be in the array at that moment? I am not enumerating the array in either thread, but it is still crashing. As soon as I remove the read routine, it works fine.
Here is the "read" :
dispatch_async(saveQueue, ^{
NSDictionary*tempstocks=[NSDictionary dictionaryWithDictionary:self.data];
It crashes on this thread with: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[9]'
Here is what is happening on another thread:
[self.data removeObjectForKey:item];
I know you cannot mutate while enumerating, but I'd think it would be okay to read while mutating; you might not know which version of the mutated object you get, but I wouldn't think that's a problem. Clearly it is. Perhaps dictionaryWithDictionary is performing an operation that first sees X objects, but by the time the routine is done the dictionary contains X-Y objects, so it is not "capturing" the entire self.data dictionary in one snapshot when it runs dictionaryWithDictionary and is instead enumerating over self.data, which would essentially be the same problem as mutating while enumerating?
I suggest you create three different queues using GCD: one for saving, a second one for the other work, and a last one used only to guard access to the shared collection.
dispatch_async(saveQueue, ^{
    dispatch_barrier_async(_queue, ^{
        NSDictionary *tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
    });
});

dispatch_async(anotherQueue, ^{
    dispatch_barrier_async(_queue, ^{
        [self.data removeObjectForKey:item];
    });
});
It's like @synchronized but using GCD.
More info: GCD Reference/dispatch_barrier_async and http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
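Spelled out a little further, the usual shape of this pattern keeps one concurrent queue per shared object, using plain dispatch_sync for reads and dispatch_barrier_async for writes. This is only a sketch; the _dataQueue ivar and the method names are assumptions, not code from the question:

// Created once, e.g. in init:
// _dataQueue = dispatch_queue_create("com.example.dataQueue", DISPATCH_QUEUE_CONCURRENT);

// Reads may run concurrently with each other.
- (NSDictionary *)dataSnapshot {
    __block NSDictionary *snapshot = nil;
    dispatch_sync(_dataQueue, ^{
        snapshot = [NSDictionary dictionaryWithDictionary:self.data];
    });
    return snapshot;
}

// A barrier write waits for in-flight reads, runs alone, then lets readers resume.
- (void)removeItem:(id)item {
    dispatch_barrier_async(_dataQueue, ^{
        [self.data removeObjectForKey:item];
    });
}

Because the write is a barrier, dictionaryWithDictionary: can never observe the dictionary mid-mutation.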
EDIT
I have made a couple of performance tests in order to understand which way is faster:
- (void)usingSynchronized
{
    dispatch_queue_t writeQyeue = dispatch_queue_create("com.tikhop.writeQyeue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(writeQyeue, ^{
        for (size_t i = 0; i < 10000; i++)
            @synchronized (arr) {
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:1]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:2]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:3]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:4]];
            }
    });
}
- (void)usingGCD
{
    dispatch_queue_t writeQyeue = dispatch_queue_create("com.tikhop.writeQyeue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(writeQyeue, ^{
        for (size_t i = 0; i < 10000; i++)
            dispatch_barrier_async(_queue, ^{
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:5]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:6]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:7]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:8]];
            });
    });
}
arr = [NSMutableArray arrayWithCapacity:1];
[arr addObject:@(0)];

[self usingSynchronized];
[self usingGCD];
I got the following result:
You cannot assume that any operation on NSDictionary is thread-safe, and almost all of them are not. You really need to set up a mutex, @synchronized access to your collection, or use a GCD serial queue for access.
dictionaryWithDictionary: is internally enumerating the argument, so you are basically mutating while enumerating.
Also, in general, you should never write to an object if another thread is going to access it in any way unless you use some sort of synchronization primitive.
Your reasoning that it "reads" whatever is there at the moment is not valid in general. Here is a little more info on the problems inherent in multithreading: Usage of registers by the compiler in multithreaded program
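A minimal sketch of the serial-queue option, reusing the names from the question; the queue itself and its label are assumptions:

// One serial queue guards every access to self.data; blocks on a serial
// queue never overlap, so a read can never observe a half-finished write.
dispatch_queue_t dataQueue = dispatch_queue_create("com.example.data", NULL); // NULL = serial

// The mutating thread:
dispatch_async(dataQueue, ^{
    [self.data removeObjectForKey:item];
});

// The "save" thread: dispatch_sync so tempstocks is ready right after the call returns.
__block NSDictionary *tempstocks = nil;
dispatch_sync(dataQueue, ^{
    tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
});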

Cocos2d: trouble scheduling a method to be called multiple times at specific time intervals

I ran the following code expecting to schedule three subsequent calls to the method "displayWarningMessage" at different time intervals (e.g. after 1 sec, after 2.6 sec, etc.), but it didn't work (it displayed the message only the first time).
I can't find a method signature in the scheduler that would do the job of calling it multiple times with specific delays. Does anyone have a suggestion?
[self scheduleOnce:@selector(displayWarningMessage) delay:0.7f];
[self scheduleOnce:@selector(displayWarningMessage) delay:1.7f];
[self scheduleOnce:@selector(displayWarningMessage) delay:3.7f];
The problem here is that when you call the first schedule it is scheduled successfully, but the next immediate call throws a warning along the lines of
CCScheduler#scheduleSelector. Selector already scheduled. Updating interval from: X.2 to X.2
You can see this in the log.
What you can do is, when the selector is called, schedule it again at the end of the method for the next time, until you are done. Keep a counter to track how many times it has been called, put all of your intervals in an array, and then schedule the next call using the interval at the index identified by the counter, like this:
NSArray *intervals = [NSArray arrayWithObjects:[NSNumber numberWithFloat:0.7], [NSNumber numberWithFloat:1.7], [NSNumber numberWithFloat:3.7], nil];
int counter = 0;

// schedule it for the first time with the interval at index counter (index 0)
[self scheduleOnce:@selector(displayWarningMessage) delay:[(NSNumber *)[intervals objectAtIndex:counter] floatValue]];
Now in your selector, do something like this:

- (void)displayWarningMessage
{
    // do all your stuff here

    // increment the counter
    counter++;
    if (counter < [intervals count]) {
        // schedule it for the next time with the interval at index counter
        [self scheduleOnce:@selector(displayWarningMessage) delay:[(NSNumber *)[intervals objectAtIndex:counter] floatValue]];
    }
}
intervals and counter should be class ivars, of course.
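For completeness, a sketch of those declarations; the class name is made up, and any CCNode subclass would do:

@interface WarningLayer : CCLayer {
    NSArray *intervals;   // the delays between calls, as NSNumbers
    int counter;          // how many times the warning has been shown so far
}
@end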
Try this:
- (void)displayWarningMessage {
    // Stuff
}

- (void)callStuff {
    CCCallFunc *call = [CCCallFunc actionWithTarget:self selector:@selector(displayWarningMessage)];
    CCDelayTime *delay1 = [CCDelayTime actionWithDuration:0.7f];
    CCDelayTime *delay2 = [CCDelayTime actionWithDuration:1.7f];
    CCDelayTime *delay3 = [CCDelayTime actionWithDuration:3.7f];
    CCSequence *actionToRun = [CCSequence actions:delay1, call, delay2, call, delay3, call, nil];
    [self runAction:actionToRun];
}
That should work for what you're trying to do, at least that's how I'd imagine doing it. I'm fairly sure you can call that CCCallFunc multiple times in one CCSequence without having to create it three individual times. You could also make those delays variable based if need be, of course. Let me know how it goes.
Alternatively, schedule the method at a repeating interval:
[self schedule:@selector(displayWarningMessage:) interval:3.2f];

- (void)displayWarningMessage:(ccTime)delta
{
    CCLOG(@"alert........!!!!!!");
}
Unschedule the call once no warning message needs to be displayed.
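If that repeating schedule:interval: route is taken, it presumably needs to be stopped at some point; in cocos2d that is done with unschedule:. A sketch, where the stop condition is a hypothetical placeholder:

- (void)displayWarningMessage:(ccTime)delta
{
    CCLOG(@"alert........!!!!!!");

    if (noMoreWarningsNeeded) {   // hypothetical condition for when to stop
        [self unschedule:@selector(displayWarningMessage:)];
    }
}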

UIActivityIndicatorView for long computational process

I have a computational process that takes quite a bit of time to perform so a UIActivityIndicatorView seems appropriate. I have a button to initiate the computation.
I've tried putting the command [calcActivity startAnimating]; at the beginning of the computation in an IBAction and [calcActivity stopAnimating]; at the end of the computation but nothing shows.
Next, I created a new IBAction to contain the starting and stopping with a call to the computation IBAction and a dummy for loop just to give the startAnimating a little chance to get started between the two. This doesn't work either.
The skeletal code looks like this:
- (IBAction)computeNow:(id)sender {
    [calcActivity startAnimating];
    for (int i = 0; i < 1000; ++i) { }
    [self calcStats];
    [calcActivity stopAnimating];
}

- (IBAction)calcStats {
    // do lots of calculations here
    return;
}
OK, as I commented, you should never perform complex calculations on the main thread. Not only does that lead to situations like yours, your app might also be rejected from the store.
Now, the reason the UIActivityIndicatorView is not updating is that the UI doesn't actually redraw itself the moment you call [calcActivity startAnimating]; instead, it gets updated after your code has run through. In your case, that means startAnimating and stopAnimating effectively take effect at the same time, so nothing really happens.
So, the 'easy' solution: start a new thread, using either these techniques or, probably better, GCD.
Thanks for the nudge, Phlibbo. I'm new to this game and appreciate all the help. I didn't comprehend all the info on the links you provided, but it did prod me to search further for examples. I found one that works well. The IBAction 'computeNow' is triggered by the calculation button. The code now looks like this:
- (IBAction)computeNow {
    [calcActivity startAnimating];
    [self performSelector:@selector(calcStats) withObject:nil afterDelay:0];
    return;
}

- (void)calcStats {
    // Lots of tedious calculations
    [calcActivity stopAnimating];
}
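For reference, the GCD route the previous answer hinted at would look roughly like this; here calcStats is assumed to do only the calculations, with the UI work moved back to the main queue:

- (IBAction)computeNow {
    [calcActivity startAnimating];

    // Run the heavy work off the main thread so the spinner keeps animating.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self calcStats];   // lots of tedious calculations, no UI work

        // UIKit should only be touched on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [calcActivity stopAnimating];
        });
    });
}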

using dispatch_sync in Grand Central Dispatch

Can anyone explain with really clear use cases what the purpose of dispatch_sync in GCD is for? I can't understand where and why I would have to use this.
Thanks!
You use it when you want to execute a block and wait for the results.
One example of this is the pattern where you're using a dispatch queue instead of locks for synchronization. For example, assume you have a shared NSMutableArray a, with access mediated by dispatch queue q. A background thread might be appending to the array (async), while your foreground thread is pulling the first item off (synchronously):
NSMutableArray *a = [[NSMutableArray alloc] init];

// All access to `a` is via this dispatch queue!
dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", NULL);

dispatch_async(q, ^{ [a addObject:something]; }); // append to the array, non-blocking

__block Something *first = nil;   // "__block" to make results from the block available
dispatch_sync(q, ^{               // note that these 3 statements...
    if ([a count] > 0) {          // ...are all executed together...
        first = [a objectAtIndex:0];   // ...as part of a single block...
        [a removeObjectAtIndex:0];     // ...to ensure consistent results
    }
});
First, understand its brother dispatch_async:
// Do something
dispatch_async(queue, ^{
    // Do something else
});
// Do More Stuff
You use dispatch_async to run a block asynchronously, typically on another thread. When you do that, the current thread does not stop. That means // Do More Stuff may be executed before // Do something else finishes.
What happens if you want the current thread to stop?
You do not use dispatch at all. Just write the code normally
//Do something
//Do something else
//Do More Stuff
Now, say you want to do something on a DIFFERENT thread and yet wait for it, ensuring that the pieces of work are done consecutively.
There are many reasons to do this. UI updates, for example, are done on the main thread.
That's where you use dispatch_sync:
// Do something
dispatch_sync(queue, ^{
    // Do something else
});
// Do More Stuff
Here // Do something, // Do something else and // Do More Stuff are done consecutively even though // Do something else runs on a different thread.
Usually, when people use a different thread, the whole purpose is so that something can be executed without waiting. Say you want to download a large amount of data but want to keep the UI smooth.
Hence, dispatch_sync is rarely used. But it's there. I personally never used it. Why not ask for some sample code or a project that does use dispatch_sync?
dispatch_sync is semantically equivalent to a traditional mutex lock.
dispatch_sync(queue, ^{
    // access shared resource
});

works the same as

pthread_mutex_lock(&lock);
// access shared resource
pthread_mutex_unlock(&lock);
David Gelhar left unsaid that his example will work only because he quietly created a serial queue (he passed NULL to dispatch_queue_create, which is equivalent to DISPATCH_QUEUE_SERIAL).
If you wish to create a concurrent queue (to gain all the multithreading power), his code will lead to a crash because of NSArray mutation (addObject:) during mutation (removeObjectAtIndex:), or even a bad access (NSArray range beyond bounds). In that case we should use a barrier to ensure exclusive access to the NSArray while both blocks run. Not only does it exclude all other writes to the NSArray while it runs, it also excludes all other reads, making the modification safe.
The example for a concurrent queue should look like this:
NSMutableArray *a = [[NSMutableArray alloc] init];

// All access to `a` is via this concurrent dispatch queue!
dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", DISPATCH_QUEUE_CONCURRENT);

// append to the array concurrently but safely, and don't wait for block completion
dispatch_barrier_async(q, ^{ [a addObject:something]; });

__block Something *first = nil;

// pop 'Something first' from the array concurrently and safely, but wait for block completion...
dispatch_barrier_sync(q, ^{
    if ([a count] > 0) {
        first = [a objectAtIndex:0];
        [a removeObjectAtIndex:0];
    }
});
// ... then here you get your 'first = [a objectAtIndex:0];' due to the synchronised dispatch.
// If you use async instead of sync here, then first will be nil.
If you want some samples of practical use, look at this question of mine:
How do I resolve this deadlock that happen ocassionally?
I solved it by ensuring that my main managedObjectContext is created on the main thread. The process is very fast and I do not mind waiting. Not waiting means I would have to deal with a lot of concurrency issues.
I need dispatch_sync because some code needs to be done on the main thread, which is a different thread than the one where the code is being executed.
So basically, if you want the code to
1. proceed as usual (you don't want to worry about race conditions, and you want to ensure that the code is completed before moving on), and
2. be done on a different thread,
then use dispatch_sync.
If 1 is violated, use dispatch_async. If 2 is violated, just write the code as usual.
So far, I only do this once, namely when something needs to be done on the main thread.
So here's the code:
+ (NSManagedObjectContext *)managedObjectContext {
    NSThread *thread = [NSThread currentThread];
    //BadgerNewAppDelegate *delegate = [BNUtilitiesQuick appDelegate];
    //NSManagedObjectContext *moc = delegate.managedObjectContext;
    if ([thread isMainThread]) {
        //NSManagedObjectContext *moc = [self managedObjectContextMainThread];
        return [self managedObjectContextMainThread];
    }
    else {
        dispatch_sync(dispatch_get_main_queue(), ^{
            [self managedObjectContextMainThread]; // Access it once to make sure it's there
        });
    }

    // a key to cache the context for the given thread
    NSMutableDictionary *managedObjectContexts = [self thread].managedObjectContexts;

    @synchronized(self)
    {
        if ([managedObjectContexts objectForKey:[self threadKey]] == nil) {
            NSManagedObjectContext *threadContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
            threadContext.parentContext = [self managedObjectContextMainThread];
            //threadContext.persistentStoreCoordinator = [self persistentStoreCoordinator]; //moc.persistentStoreCoordinator;// [moc persistentStoreCoordinator];
            threadContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy;
            [managedObjectContexts setObject:threadContext forKey:[self threadKey]];
        }
    }
    return [managedObjectContexts objectForKey:[self threadKey]];
}
dispatch_sync is mainly used inside a dispatch_async block to perform some operations on the main thread (like updating the UI).
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Update UI on the main thread
    dispatch_sync(dispatch_get_main_queue(), ^{
        self.view.backgroundColor = color;
    });
});
Here's a half-way realistic example. You have 2000 zip files that you want to analyze in parallel. But the zip library isn't thread-safe. Therefore, all work that touches the zip library goes into the unzipQueue queue. (The example is in Ruby, but all calls map directly to the C library. "apply", for example, maps to dispatch_apply(3))
#!/usr/bin/env macruby -w

require 'rubygems'
require 'zip/zipfilesystem'

@unzipQueue = Dispatch::Queue.new('ch.unibe.niko.unzipQueue')

def extractFile(n)
  @unzipQueue.sync do
    Zip::ZipFile.open("Quelltext.zip") { |zipfile|
      sourceCode = zipfile.file.read("graph.php")
    }
  end
end

Dispatch::Queue.concurrent.apply(2000) do |i|
  puts i if i % 200 == 0
  extractFile(i)
end
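Translated back into the C calls that those Ruby wrappers map onto, the same structure would look roughly like this; readFileFromZip is a made-up stand-in for the actual zip-library call:

// A serial queue serialises every call into the non-thread-safe zip library,
// while dispatch_apply fans the 2000 jobs out over the concurrent global queue.
dispatch_queue_t unzipQueue = dispatch_queue_create("ch.unibe.niko.unzipQueue", NULL);

dispatch_apply(2000, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    if (i % 200 == 0) printf("%zu\n", i);

    dispatch_sync(unzipQueue, ^{
        readFileFromZip("Quelltext.zip", "graph.php");   // hypothetical helper
    });
});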
I've used dispatch_sync inside an async dispatch to signal UI changes back to the main thread.
My async block holds back only a little, and I know the main thread is aware of the UI changes and will action them. I generally use this in a block of code that takes some CPU time but from which I still want to action UI changes. Actioning the UI changes in the async block is useless, as the UI, I believe, runs on the main thread. Also, actioning them as secondary async blocks, or via a self delegate, results in the UI only seeing them a few seconds later and it looks tardy.
Example block:
dispatch_queue_t myQueue = dispatch_queue_create("my.dispatch.q", 0);
dispatch_async(myQueue, ^{
    // Do some nasty CPU intensive processing, load a file, whatever

    if (somecondition in the nasty CPU processing stuff)
    {
        // Do stuff
        dispatch_sync(dispatch_get_main_queue(), ^{ /* Do stuff that affects the UI here */ });
    }
});
