Mutating array while reading, not enumerating - Cocoa

If I have two different threads via GCD accessing an NSMutableArray and one is merely creating a new array based off the mutable array while the other thread is deleting records from the array, should I expect this to be a problem? That is, shouldn't the copy, which I presume is merely "reading" the array, just get whatever happens to be in the array at that moment? I am not enumerating the array in either thread, but it is still crashing. As soon as I remove the read routine, it works fine.
Here is the "read" :
dispatch_async(saveQueue, ^{
    NSDictionary *tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
    // ...
});
It crashes on this thread with: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[9]'
Here is what is happening on another thread:
[self.data removeObjectForKey:item];
I know you cannot mutate while enumerating, but I'd think it would be okay to read while mutating: you might not know which version of the mutated object you get, but I wouldn't expect that to be a problem. Clearly it is, though. Perhaps dictionaryWithDictionary: first sees X objects, but by the time it finishes the dictionary contains X-Y objects; in other words, it isn't "capturing" the entire self.data dictionary in one snapshot but is instead enumerating over self.data, which would be essentially the same problem as mutating while enumerating?

I suggest you create three different queues using GCD: one for saving (saveQueue), a second for the other work (anotherQueue), and a last one (_queue) dedicated to guarding access to the shared collection.
dispatch_async(saveQueue, ^{
    dispatch_barrier_async(_queue, ^{
        NSDictionary *tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
    });
});
dispatch_async(anotherQueue, ^{
    dispatch_barrier_async(_queue, ^{
        [self.data removeObjectForKey:item];
    });
});
It's like @synchronized but using GCD.
More info: GCD Reference/dispatch_barrier_async and http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
EDIT
I made a couple of performance tests to understand which approach is faster:
- (void)usingSynchronized
{
    dispatch_queue_t writeQueue = dispatch_queue_create("com.tikhop.writeQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(writeQueue, ^{
        for (size_t i = 0; i < 10000; i++)
            @synchronized (arr) {
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:1]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:2]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:3]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:4]];
            }
    });
}
- (void)usingGCD
{
    dispatch_queue_t writeQueue = dispatch_queue_create("com.tikhop.writeQueue", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(writeQueue, ^{
        for (size_t i = 0; i < 10000; i++)
            dispatch_barrier_async(_queue, ^{
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:5]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:6]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:7]];
                [arr replaceObjectAtIndex:0 withObject:[NSNumber numberWithInt:8]];
            });
    });
}
arr = [NSMutableArray arrayWithCapacity:1];
[arr addObject:@(0)];
[self usingSynchronized];
[self usingGCD];
I got the following result:

You cannot assume that any operation on NSDictionary is thread-safe, and almost all of them are not. You really need to set up a mutex, @synchronized access to your collection, or use a GCD serial queue for access.
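For example, a minimal sketch of the serial-queue option applied to the snippets above (the queue name and where it is created are assumptions; every access to self.data, reads included, would have to go through this queue):
// Assumption: created once, e.g. in init, and kept around as an ivar or property.
dispatch_queue_t dataQueue = dispatch_queue_create("com.example.dataQueue", DISPATCH_QUEUE_SERIAL);

// "Read" thread: the copy runs with no writer in flight.
dispatch_async(saveQueue, ^{
    __block NSDictionary *tempstocks = nil;
    dispatch_sync(dataQueue, ^{
        tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
    });
    // ... save tempstocks ...
});

// Writer thread: the mutation is queued behind any in-flight copy.
dispatch_async(dataQueue, ^{
    [self.data removeObjectForKey:item];
});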

dictionaryWithDictionary: is internally enumerating the argument, so you are basically mutating while enumerating.
Also, in general, you should never write to an object if another thread is going to access it in any way unless you use some sort of synchronization primitive.
Your reasoning that it "reads" whatever is there at the moment is not valid in general. Here is a little more info on the problems inherent in multithreading: Usage of registers by the compiler in multithreaded program
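As a minimal sketch of such a primitive, here is the @synchronized variant of the same idea (locking on self.data itself is an assumption; it only works if every thread takes the same lock for every access, reads included):
// "Read" thread
NSDictionary *tempstocks = nil;
@synchronized (self.data) {
    tempstocks = [NSDictionary dictionaryWithDictionary:self.data];
}

// Writer thread: the same lock object serializes the mutation.
@synchronized (self.data) {
    [self.data removeObjectForKey:item];
}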

Related

Transforming a code to Grand Central Dispatch

I have an array of NSNumbers that has to pass 20 tests. If one test fails, the array is invalid; if all tests pass, the array is valid. I am trying to do it in a way that stops doing the remaining tests as soon as the first failure happens: if a failure happens on the 3rd test, stop evaluating the other tests.
I am trying to convert my serial-processing code to parallel processing with Grand Central Dispatch, but I cannot wrap my head around it.
This is what I have.
First the definition of the tests to be done. This array is used to run the tests.
Every individual test returns YES when it fails and NO when it is ok.
#define TESTS @[ \
    @"averageNotOK:", \
    @"numbersOverRange:", \
    @"numbersUnderRange:", \
    @"numbersForbidden:", \
    // ... etc etc
    @"numbersNotOnCurve:"]
- (BOOL) numbersPassedAllTests:(NSArray *)numbers {
NSInteger count = [TESTS count];
for (int i=0; i<count; i++) {
NSString *aMethodName = TESTS[i];
SEL selector = NSSelectorFromString(aMethodName);
BOOL failed = NO;
NSMethodSignature *signature = [[self class] instanceMethodSignatureForSelector:selector];
NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:signature];
[invocation setSelector:selector];
[invocation setTarget:self];
[invocation setArgument:&numbers atIndex:2];
[invocation invoke];
[invocation getReturnValue:&failed];
if (failed) {
return NO;
}
}
return YES;
}
This works perfectly, but performs the tests sequentially.
How can I run these tests in parallel while executing as few of them as necessary?
I assume you've spotted dispatch_apply which is the trivial parallel for. You've realised it can't do an early exit. Hence the question.
I'm afraid the answer is you'll need to do some bookkeeping for yourself, but luckily it shouldn't be too hard. To avoid repeating what you've got, pretend I'd turned the stuff inside your loop into:
BOOL failedTest(int);
So your serial loop looks like:
for (int i=0; i<count; i++) {
if(failedTest(i))
return NO;
}
return YES;
Then you might do:
#import <libkern/OSAtomic.h>
volatile __block int32_t hasFailed = 0;
dispatch_apply(
count,
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
^(size_t i)
{
// do no computation if somebody else already failed
if(hasFailed) return;
if(failedTest(i))
OSAtomicIncrement32(&hasFailed);
});
return !hasFailed;
So it'll keep starting tests until one of them has previously failed. The OSAtomicIncrement32 just ensures atomicity without requiring a mutex. It'll usually turn into a cheap single instruction. You could get away with just using a BOOL as atomicity isn't really going to be a problem but why not just do it properly?
EDIT: also, you could just use @selector directly and create an array of selectors rather than using NSSelectorFromString with an array of strings, to save lookup time. If your tests are really cheap then consider doing them part serial, part parallel by having the dispatch_apply do, say, count/10 dispatches and having each dispatch do 10 tests. Otherwise GCD will just issue count instances of the block and issuing has an associated cost.
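A sketch of that chunking idea, replacing the dispatch_apply above; it reuses the hypothetical failedTest() and assumes, for brevity, that count is an exact multiple of the chunk size:
const size_t testsPerBlock = 10;        // assumed tuning value
size_t blocks = count / testsPerBlock;  // assumes count % testsPerBlock == 0

volatile __block int32_t hasFailed = 0;
dispatch_apply(
    blocks,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
    ^(size_t b)
    {
        for (size_t j = 0; j < testsPerBlock; j++) {
            // skip the remaining work if another block already failed
            if (hasFailed) return;
            if (failedTest((int)(b * testsPerBlock + j))) {
                OSAtomicIncrement32(&hasFailed);
                return;
            }
        }
    });
return !hasFailed;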

Cocos2d: trouble scheduling a method to be called multiple times at specific time intervals

I ran the following code expecting to schedule three subsequent calls to the method displayWarningMessage at different time intervals (e.g. after 1 sec, after 2.6 sec, etc.), but it didn't work: the message was displayed only the first time.
I can't find a method signature in the scheduler that would display it multiple times with specific delays. Does anyone have a suggestion?
[self scheduleOnce:@selector(displayWarningMessage) delay:0.7f];
[self scheduleOnce:@selector(displayWarningMessage) delay:1.7f];
[self scheduleOnce:@selector(displayWarningMessage) delay:3.7f];
The problem here is that the first schedule call is accepted, but each immediately following call just throws a warning, which you can see in the log:
CCScheduler#scheduleSelector. Selector already scheduled. Updating interval from: X.2 to X.2
What you can do instead is reschedule the selector at the end of the method itself, until you are done. Keep a counter to track how many times it has been called, put all of your intervals in an array, and schedule the next call using the interval at the index identified by the counter, like this:
NSArray *intervals = [NSArray arrayWithObjects:[NSNumber numberWithFloat:0.7], [NSNumber numberWithFloat:1.7], [NSNumber numberWithFloat:3.7], nil];
int counter = 0;
// schedule it for the first time with the interval at index counter (index 0)
[self scheduleOnce:@selector(displayWarningMessage) delay:[(NSNumber *)[intervals objectAtIndex:counter] floatValue]];
now in your selector, do something like this:
-(void)displayWarningMessage
{
    // do all your stuff here

    // increment counter
    counter++;
    if (counter < [intervals count])
    {
        // schedule it for the next time with the interval at index counter
        [self scheduleOnce:@selector(displayWarningMessage) delay:[(NSNumber *)[intervals objectAtIndex:counter] floatValue]];
    }
}
intervals and counter should be instance variables, of course.
Try this:
- (void)displayWarningMessage {
//Stuff
}
- (void)callStuff {
CCCallFunc *call = [CCCallFunc actionWithTarget:self selector:@selector(displayWarningMessage)];
CCDelayTime *delay1 = [CCDelayTime actionWithDuration:0.7f];
CCDelayTime *delay2 = [CCDelayTime actionWithDuration:1.7f];
CCDelayTime *delay3 = [CCDelayTime actionWithDuration:3.7f];
CCSequence *actionToRun = [CCSequence actions:delay1, call, delay2, call, delay3, call, nil];
[self runAction:actionToRun];
}
That should work for what you're trying to do, at least that's how I'd imagine doing it. I'm fairly sure you can call that CCCallFunc multiple times in one CCSequence without having to create it three individual times. You could also make those delays variable based if need be, of course. Let me know how it goes.
Alternatively, schedule the method with a repeating interval:
[self schedule:@selector(displayWarningMessage:) interval:3.2f];
-(void) displayWarningMessage:(ccTime)delta
{
    CCLOG(@"alert........!!!!!!");
}
Unschedule the selector once the warning message is no longer needed.

IOBluetooth Synchronous Reads

Right now I'm working on a program using IOBluetooth, and I need synchronous reads: I call a method, it writes a given number of bytes to the port, then reads a given number of bytes and returns them. I currently have a complex system of NSThreads, NSLocks, and NSConditions that sort of works, but it is very slow. Also, after certain calls I need to make sure there's no extra data, so I'd normally flush the buffer, but that's not possible with IOBluetooth's asynchronous callback. Any thoughts on how to make sure that, no matter what, all data received after a specific point really was received after that point?
I really haven't dealt at all with synchronization and multithreading of this type, since all the work I've done so far is using synchronous calls, so I'd appreciate any thoughts on the matter.
Here's the callback for incoming data (the "incomingData" object is NSMutableData):
- (void)rfcommChannelData:(IOBluetoothRFCOMMChannel*)rfcommChannel data:(void *)dataPointer length:(size_t)dataLength {
[dataLock lock];
NSData *data = [NSData dataWithBytes:dataPointer length:dataLength];
[incomingData appendData:data];
if (dataWaitCondition && [incomingData length] >= bytesToWaitFor) {
[dataWaitCondition signal];
}
[dataLock unlock];
[delegate bluetoothDataReceived];
}
And here's the method that waits until the given number of bytes has been received before returning the data object (this gets called from an alternate thread).
- (NSData *)waitForBytes:(int)numberOfBytes {
bytesToWaitFor = numberOfBytes;
[dataLock lock];
dataWaitCondition = [[NSCondition alloc] init];
[dataWaitCondition lock];
[dataLock unlock];
while ([incomingData length] < numberOfBytes) {
[dataWaitCondition wait];
}
[dataLock lock];
NSData *data = [incomingData copy];
[dataWaitCondition unlock];
dataWaitCondition = NULL;
[dataLock unlock];
return data;
}
Doing any kind of IO/communication in a synchronous way will get you into trouble.
You can avoid multithreading and locks by using a simple state machine for your application logic. Whenever data is received, the state machine is triggered and can process the data. Once all the data is there, you can take the next step in your app. You can use synchronous calls for sending if you like, since sending simply hands the data off to the Bluetooth system.
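A rough sketch of that state-machine approach, assuming a simple two-state enum, an expectedLength you derive from your protocol, and a hypothetical handleResponse: method; because everything runs on the delegate callback's thread, no locks or condition variables are needed:
typedef enum {
    BTReadStateIdle,
    BTReadStateWaitingForResponse
} BTReadState;

// ivars (assumed): BTReadState readState; NSMutableData *incomingData; NSUInteger expectedLength;

- (void)rfcommChannelData:(IOBluetoothRFCOMMChannel *)rfcommChannel data:(void *)dataPointer length:(size_t)dataLength {
    // All processing happens on the delegate's thread, so no locking is required.
    [incomingData appendBytes:dataPointer length:dataLength];

    if (readState == BTReadStateWaitingForResponse && [incomingData length] >= expectedLength) {
        NSData *response = [incomingData subdataWithRange:NSMakeRange(0, expectedLength)];
        // Discard the consumed bytes so stale data cannot leak into the next exchange.
        [incomingData replaceBytesInRange:NSMakeRange(0, expectedLength) withBytes:NULL length:0];
        readState = BTReadStateIdle;
        [self handleResponse:response];   // hypothetical: drives the next step of your protocol
    }
}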

Is there a more memory efficient way to search through a Core Data database?

I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data Database, and this is the code I deemed suitable for this task:
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext];
[fetchRequest setEntity:entity];
NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
[fetchRequest setPredicate:pred];
NSError *err;
NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
[fetchRequest release];
if ([icd9s count] > 0) {
for (int i = 0; i < [icd9s count]; i++) {
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil)
{
[pool release];
return [icd9s objectAtIndex:i];
}
[pool release];
}
}
return nil;
After more thorough testing, it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent of the way through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient approach? Thanks in advance!
Don't use the like operator in your request predicate. Use =. That should be much faster.
You can specify the case insensitivity of the search via the predicate, using the [c] modifier.
It's not necessary to create and destroy an NSAutoreleasePool on each iteration of your loop. In fact, it's probably not needed at all.
You don't need to do any of the checking inside the for() loop. You're duplicating the work of your predicate.
So I would change your code to be:
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
[fetchRequest setEntity:...];
[fetchRequest setPredicate:[NSPredicate predicateWithFormat:@"uniqueID =[c] %@", uniqueIdentifier]];
NSError *err = nil;
NSArray *icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
[fetchRequest release];
if (err == nil && [icd9s count] > 0) {
return [icd9s objectAtIndex:0]; //we know the uniqueID matches, because of the predicate
}
return nil;
Use the Leaks template in Instruments to hunt down the leak(s). Your current code may be just fine once you fix them. The leak(s) may even be somewhere other than code.
Other problems:
Using fast enumeration will make the loop over the array (1) faster and (2) much easier to read; a sketch combining these suggestions follows after this list.
Don't send release to an autorelease pool. If you ever port the code to garbage-collected Cocoa, the pool will not do anything. Instead, send it drain; in retain-release Cocoa and in Cocoa Touch, this works the same as release, and in garbage-collected Cocoa, it pokes the garbage collector, which is the closest equivalent in GC-land to draining the pool.
Don't repeat yourself. You currently have two [pool release]; lines for one pool, which gets every experienced Cocoa and Cocoa Touch programmer really worried. Store the result of your tests upon the name in a Boolean variable, then drain the pool before the condition, then conditionally return the object.
Be careful with variable types. -[NSArray count] returns and -[NSArray objectAtIndex:] takes an NSUInteger, not an int. Try to keep all your types matching at all times. (Switching to fast enumeration will, of course, solve this instance of this problem in a different way.)
Don't hide releases. I almost accused you of leaking the fetch request, then noticed that you'd buried it in the middle of the code. Make your releases prominent so that you're less likely to accidentally add redundant (i.e., crash-inducing) ones.
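If you do keep a manual check (for instance while debugging the predicate), a sketch of the loop with those suggestions applied (fast enumeration, a single result flag, and drain instead of release) might look like this:
for (NSManagedObject *candidate in icd9s) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSString *name = [candidate valueForKey:@"uniqueID"];
    // store the result of the test before the pool goes away
    BOOL matches = (name != nil && [name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame);
    [pool drain];
    if (matches) {
        return candidate;
    }
}
return nil;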

using dispatch_sync in Grand Central Dispatch

Can anyone explain with really clear use cases what the purpose of dispatch_sync in GCD is for? I can't understand where and why I would have to use this.
Thanks!
You use it when you want to execute a block and wait for the results.
One example of this is the pattern where you're using a dispatch queue instead of locks for synchronization. For example, assume you have a shared NSMutableArray a, with access mediated by dispatch queue q. A background thread might be appending to the array (async), while your foreground thread is pulling the first item off (synchronously):
NSMutableArray *a = [[NSMutableArray alloc] init];
// All access to `a` is via this dispatch queue!
dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", NULL);
dispatch_async(q, ^{ [a addObject:something]; }); // append to array, non-blocking
__block Something *first = nil; // "__block" to make results from block available
dispatch_sync(q, ^{ // note that these 3 statements...
if ([a count] > 0) { // ...are all executed together...
first = [a objectAtIndex:0]; // ...as part of a single block...
[a removeObjectAtIndex:0]; // ...to ensure consistent results
}
});
First understand its brother dispatch_async
//Do something
dispatch_async(queue, ^{
//Do something else
});
//Do More Stuff
You use dispatch_async to run a block asynchronously, typically on another thread. When you do that, the current thread does not stop. That means //Do More Stuff may be executed before //Do something else finishes.
What happens if you want the current thread to stop?
You do not use dispatch at all. Just write the code normally
//Do something
//Do something else
//Do More Stuff
Now, say you want to do something on a DIFFERENT thread and yet wait for it, ensuring that the steps are done consecutively.
There are many reasons to do this; UI updates, for example, are done on the main thread.
That's where you use dispatch_sync
//Do something
dispatch_sync(queue, ^{
//Do something else
});
//Do More Stuff
Here you get //Do something, //Do something else and //Do More Stuff done consecutively, even though //Do something else is done on a different thread.
Usually, when people use a different thread, the whole purpose is so that something can get executed without waiting. Say you want to download a large amount of data but keep the UI smooth.
Hence, dispatch_sync is rarely used. But it's there. I personally never used it. Why not ask for some sample code or project that does use dispatch_sync?
dispatch_sync is semantically equivalent to a traditional mutex lock.
dispatch_sync(queue, ^{
//access shared resource
});
works the same as
pthread_mutex_lock(&lock);
//access shared resource
pthread_mutex_unlock(&lock);
David Gelhar left unsaid that his example works only because he quietly created a serial queue (he passed NULL to dispatch_queue_create, which is equivalent to DISPATCH_QUEUE_SERIAL).
If you wish to create a concurrent queue (to gain all the multithreading power), his code will lead to a crash because of NSMutableArray mutation (addObject:) during mutation (removeObjectAtIndex:), or even a bad access (NSArray range beyond bounds). In that case you should use a barrier to ensure exclusive access to the NSMutableArray while both blocks run. Not only does it exclude all other writes to the array while it runs, it also excludes all other reads, making the modification safe.
An example for a concurrent queue should look like this:
NSMutableArray *a = [[NSMutableArray alloc] init];
// All access to `a` is via this concurrent dispatch queue!
dispatch_queue_t q = dispatch_queue_create("com.foo.samplequeue", DISPATCH_QUEUE_CONCURRENT);
// append to array concurrently but safely and don't wait for block completion
dispatch_barrier_async(q, ^{ [a addObject:something]; });
__block Something *first = nil;
// pop 'Something first' from array concurrently and safely but wait for block completion...
dispatch_barrier_sync(q, ^{
if ([a count] > 0) {
first = [a objectAtIndex:0];
[a removeObjectAtIndex:0];
}
});
// ... then here you get your 'first = [a objectAtIndex:0];' due to synchronised dispatch.
// If you use async instead of sync here, then first will be nil.
If you want some samples of practical use look at this question of mine:
How do I resolve this deadlock that happen ocassionally?
I solved it by ensuring that my main managedObjectContext is created on the main thread. The process is very fast and I do not mind waiting; not waiting would mean having to deal with a lot of concurrency issues.
I need dispatch_sync because some code needs to run on the main thread, which is a different thread from the one where the code is executing.
So basically, if you want the code to
1. proceed as usual (you don't want to worry about race conditions and you want the work completed before moving on), and
2. run on a different thread,
then use dispatch_sync. If 1 is violated, use dispatch_async; if 2 is violated, just write the code as usual.
So far, I only do this once, namely when something needs to be done on the main thread.
So here's the code:
+(NSManagedObjectContext *)managedObjectContext {
NSThread *thread = [NSThread currentThread];
//BadgerNewAppDelegate *delegate = [BNUtilitiesQuick appDelegate];
//NSManagedObjectContext *moc = delegate.managedObjectContext;
if ([thread isMainThread]) {
//NSManagedObjectContext *moc = [self managedObjectContextMainThread];
return [self managedObjectContextMainThread];
}
else{
dispatch_sync(dispatch_get_main_queue(),^{
[self managedObjectContextMainThread];//Access it once to make sure it's there
});
}
// a key to cache the context for the given thread
NSMutableDictionary *managedObjectContexts =[self thread].managedObjectContexts;
@synchronized(self)
{
if ([managedObjectContexts objectForKey:[self threadKey]] == nil ) {
NSManagedObjectContext *threadContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
threadContext.parentContext = [self managedObjectContextMainThread];
//threadContext.persistentStoreCoordinator= [self persistentStoreCoordinator]; //moc.persistentStoreCoordinator;// [moc persistentStoreCoordinator];
threadContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy;
[managedObjectContexts setObject:threadContext forKey:[self threadKey]];
}
}
return [managedObjectContexts objectForKey:[self threadKey]];
}
dispatch_sync is often used inside a dispatch_async block to perform some operations on the main thread (such as updating the UI).
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
//Update UI in main thread
dispatch_sync(dispatch_get_main_queue(), ^{
self.view.backgroundColor = color;
});
});
Here's a half-way realistic example. You have 2000 zip files that you want to analyze in parallel. But the zip library isn't thread-safe. Therefore, all work that touches the zip library goes into the unzipQueue queue. (The example is in Ruby, but all calls map directly to the C library. "apply", for example, maps to dispatch_apply(3))
#!/usr/bin/env macruby -w
require 'rubygems'
require 'zip/zipfilesystem'
@unzipQueue = Dispatch::Queue.new('ch.unibe.niko.unzipQueue')
def extractFile(n)
@unzipQueue.sync do
Zip::ZipFile.open("Quelltext.zip") { |zipfile|
sourceCode = zipfile.file.read("graph.php")
}
end
end
Dispatch::Queue.concurrent.apply(2000) do |i|
puts i if i % 200 == 0
extractFile(i)
end
I've used dispatch sync when inside an async dispatch to signal UI changes back to the main thread.
My async block holds things back only a little, and I know the main thread is aware of the UI changes and will action them. I generally use this in a block of processing code that takes some CPU time but from which I still want to make UI changes. Actioning the UI changes directly in the async block is useless because the UI, I believe, runs on the main thread. Also, actioning them as secondary async blocks, or via a self delegate, results in the UI only seeing them a few seconds later, and it looks tardy.
Example block:
dispatch_queue_t myQueue = dispatch_queue_create("my.dispatch.q", 0);
dispatch_async(myQueue,
^{
// Do some nasty CPU intensive processing, load file whatever
if (somecondition in the nasty CPU processing stuff)
{
// Do stuff
dispatch_sync(dispatch_get_main_queue(),^{/* Do Stuff that affects UI Here */});
}
});
