NSScreen get the projector/TV Out/AirPlay screen? - macos

What is the best way to get the NSScreen instance that is most likely to be a projector or AirPlay display (or even TV-out)? I'm writing presentation software and need to know which screen most likely represents the "presentation" screen.
Some options came to mind:
A. Use the second instance if there is one. Of course this obviously won't give good results if there are more than two screens attached.
NSScreen* projectorScreen = [NSScreen mainScreen];
NSArray* screens = [NSScreen screens];
if (screens.count > 1) {
    projectorScreen = screens[1];
}
B. Use the first screen if it's not the main screen. The reasoning behind this is that in mirroring cases, the first screen should be the one with the highest pixel depth.
NSScreen* projectorScreen = [NSScreen mainScreen];
NSArray* screens = [NSScreen screens];
if (screens.count > 1) {
    if (screens[0] != projectorScreen) {
        projectorScreen = screens[0];
    }
}
C. Use the first (lowest-indexed) screen that is not the main screen. The reasoning is just to choose any screen that is not the main screen.
NSScreen* projectorScreen = [NSScreen mainScreen];
NSArray* screens = [NSScreen screens];
if (screens.count > 1) {
    for (NSScreen* screen in screens) {
        if (screen != projectorScreen) {
            projectorScreen = screen;
            break;
        }
    }
}
D. Use NSScreen's deviceDescription dictionary and find the biggest screen in real-world coordinates. That is, divide NSDeviceSize's width and height by NSDeviceResolution; theoretically this should yield an area in square inches. However, I'm not fully convinced that the OS knows the real-world size of each screen.
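A rough sketch of what option D could look like (untested; it hinges on NSDeviceResolution reporting real dpi rather than a nominal 72):
// Sketch of option D: physical extent ≈ pixels / dpi, per axis.
NSScreen* projectorScreen = [NSScreen mainScreen];
CGFloat biggestArea = 0;
for (NSScreen* screen in [NSScreen screens]) {
    NSDictionary* description = [screen deviceDescription];
    NSSize pixels = [[description objectForKey:NSDeviceSize] sizeValue];
    NSSize dpi = [[description objectForKey:NSDeviceResolution] sizeValue];
    // Width and height in inches; dpi may be a nominal 72 on some systems.
    CGFloat area = (pixels.width / dpi.width) * (pixels.height / dpi.height);
    if (area > biggestArea) {
        biggestArea = area;
        projectorScreen = screen;
    }
}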
Any other suggestions?
Granted, there isn't any 100% correct heuristic, but then again, picking the correct screen most of the time should be sufficient.

Looks like option (D) is the best, with some changes. Apparently OS X has a pretty good idea about the real-world size of each display, and you can get it via CGDisplayScreenSize. It's then pretty straightforward to pick the largest one and assume that's the presentation screen.
Granted this doesn't accurately measure projectors, but my informal testing shows that the function returns pretty good pixels-per-inch values for each screen:
Macbook Air 13": {290, 180} mm, 126 ppi
Apple Cinema Display: {596, 336} mm, 109 ppi
An Epson Overhead Projector: {799, 450} mm, 61 ppi
(the above were converted with a constant 25.4 millimeters per inch).
Here's the code that I used:
#import <Foundation/Foundation.h>
#import <AppKit/AppKit.h>
#import <ApplicationServices/ApplicationServices.h>

int main(int argc, char *argv[]) {
    @autoreleasepool {
        NSArray* screens = [NSScreen screens];
        CGFloat __block biggestArea = 0;
        NSScreen* __block presentationScreen;
        NSUInteger __block presentationScreenIndex;
        [screens enumerateObjectsUsingBlock:^(NSScreen* screen, NSUInteger idx, BOOL* stop) {
            NSDictionary* description = [screen deviceDescription];
            NSSize displayPixelSize = [[description objectForKey:NSDeviceSize] sizeValue];
            CGSize displayPhysicalSize = CGDisplayScreenSize(
                [[description objectForKey:@"NSScreenNumber"] unsignedIntValue]);
            NSLog(@"Screen %d Physical Size: %@ ppi is %0.2f", (int)idx, NSStringFromSize(displayPhysicalSize),
                  (displayPixelSize.width / displayPhysicalSize.width) * 25.4f);
            // there being 25.4 mm in an inch
            CGFloat screenArea = displayPhysicalSize.width * displayPhysicalSize.height;
            if (screenArea > biggestArea) {
                presentationScreen = screen;
                biggestArea = screenArea;
                presentationScreenIndex = idx;
            }
        }];
        NSLog(@"Presentation screen: index: %d %@", (int)presentationScreenIndex, presentationScreen);
    }
}

Related

NSScreen visibleFrame only accounting for menu bar area on main screen

I noticed that NSScreen's visibleFrame method isn't subtracting the menu bar dimensions on my non-main screens. Say I have the following code:
DB("Cocoa NSScreen rects:");
NSArray *screens = [NSScreen screens];
for(NSUInteger i = 0; i < [screens count]; ++i) {
NSScreen *screen = [screens objectAtIndex:i];
CGRect r = [screen visibleFrame];
const char *suffix = "";
if(screen == [NSScreen mainScreen])
suffix = " (main screen)";
DB(" %lu. (%.2f, %.2f) + (%.2f x %.2f)%s", (unsigned long)i, r.origin.x, r.origin.y, r.size.width, r.size.height, suffix);
}
I run it on my Mac, which has a menu bar on every monitor. I then get the following output:
Cocoa NSScreen rects:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1920.00)
2. (1280.00, 800.00) + (1920.00 x 1200.00)
The size of the menu bar and (hidden) Dock appears to have been correctly subtracted from the main screen's visible area - but the menu bars on my external monitors have not been accounted for! (Assuming the menu bar is 23 pixels high on every screen, I would expect screen 1 to be something like 1200x1897 and screen 2 to be around 1920x1177.)
Aside from wondering how big the screen is - and there you'll just have to trust me, I'm afraid! - what am I doing wrong? How do I get accurate screen bounds?
(OS X Yosemite 10.10.3)
Until the program creates an NSWindow - which this program, as far as I can tell, never does - the reported screen bounds appear to be inaccurate. So, before the program fetches the screen bounds, it now runs this bit of code:
if (!ever_created_hack_window) {
    NSWindow *window = [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 100, 100)
                                                   styleMask:NSTitledWindowMask
                                                     backing:NSBackingStoreBuffered
                                                       defer:YES
                                                      screen:nil];
    [window release];
    window = nil;
    ever_created_hack_window = YES;
}
(ever_created_hack_window is just a global BOOL.)
Once this has been done, I get the screen dimensions I expect:
0. (4.00, 0.00) + (1276.00 x 777.00) (main screen)
1. (3200.00, 9.00) + (1200.00 x 1897.00)
2. (1280.00, 800.00) + (1920.00 x 1177.00)
Additionally, it now correctly picks up changes in main screen.
(This could be state that is set up by calling NSApplicationMain. This program doesn't do that, either.)
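If it really is application setup that matters, a lighter-weight variant of the hack might be to initialize NSApplication explicitly before querying screens. A sketch, not verified (ever_initialized_app is a hypothetical global BOOL):
// Untested alternative: initializing NSApplication, rather than creating a
// throwaway window, may be enough for NSScreen to report accurate bounds.
if (!ever_initialized_app) {
    [NSApplication sharedApplication]; // creates NSApp, connects to the window server
    [NSApp finishLaunching];
    ever_initialized_app = YES;
}
NSArray *screens = [NSScreen screens]; // then query as before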

NSSplitView: Controlling divider position during window resize

I have an NSSplitView with two panes - a sidebar table view on the left and a web view on the right. I also have a delegate set that handles constraints for the sidebar like this:
- (CGFloat)splitView:(NSSplitView *)splitView constrainMaxCoordinate:(CGFloat)proposedMax ofSubviewAt:(NSInteger)dividerIndex {
    return 500.0f;
}

- (CGFloat)splitView:(NSSplitView *)splitView constrainMinCoordinate:(CGFloat)proposedMinimumPosition ofSubviewAt:(NSInteger)dividerIndex {
    return 175.0f;
}

- (BOOL)splitView:(NSSplitView *)splitView canCollapseSubview:(NSView *)subview {
    return NO;
}
This means the sidebar can only be resized between 175 and 500 pixels, which works fine when dragging the divider handle. But when the whole window is resized, the divider gets repositioned outside these constraints.
Does anybody know how to control this?
Additionally: if I want to store the user's choice of sidebar width, is it a good idea to read it out, save it to a preferences file, and restore it later, or is there a more straightforward way to do this? I noticed that the window's state gets saved in some cases - does this happen generally, or do I have to control it?
Thanks in advance
Arne
I initially implemented the NSSplitView delegate functions and ended up with a lot of code just to do something as simple as limiting the minimum size of each side of the split view.
I then changed my approach and found a clean and extremely simple solution: I set an Auto Layout width constraint (>= my desired minimum size) on the NSView for one side of the NSSplitView, and did the same on the other side. With these two simple constraints the NSSplitView worked perfectly without any delegate calls.
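A minimal sketch of that setup in code (leftPane and rightPane are illustrative outlet names; the same constraints can be set in Interface Builder instead):
// Give each side of the split view a minimum width via Auto Layout,
// replacing the delegate's constrainMin/MaxCoordinate methods.
NSLayoutConstraint *leftMin =
    [NSLayoutConstraint constraintWithItem:self.leftPane
                                 attribute:NSLayoutAttributeWidth
                                 relatedBy:NSLayoutRelationGreaterThanOrEqual
                                    toItem:nil
                                 attribute:NSLayoutAttributeNotAnAttribute
                                multiplier:1.0
                                  constant:175.0];
NSLayoutConstraint *rightMin =
    [NSLayoutConstraint constraintWithItem:self.rightPane
                                 attribute:NSLayoutAttributeWidth
                                 relatedBy:NSLayoutRelationGreaterThanOrEqual
                                    toItem:nil
                                 attribute:NSLayoutAttributeNotAnAttribute
                                multiplier:1.0
                                  constant:300.0];
[self.leftPane addConstraint:leftMin];
[self.rightPane addConstraint:rightMin];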
What you are looking for is:
- (void)splitView:(NSSplitView*)sender resizeSubviewsWithOldSize:(NSSize)oldSize
[sender frame] will be the new size of your NSSplitView after the resize. Then just readjust your subviews accordingly.
The problem is that when the NSSplitView itself is resized, -adjustSubviews gets called to do the work, but it plain ignores the min/max constraints from the delegate!
However -setPosition:ofDividerAtIndex: does take the constraints into account.
All you need to do is combine both - this example assumes an NSSplitView with only 2 views:
- (CGFloat)splitView:(NSSplitView*)splitView constrainMinCoordinate:(CGFloat)proposedMinimumPosition ofSubviewAt:(NSInteger)dividerIndex {
    return 300;
}

- (CGFloat)splitView:(NSSplitView*)splitView constrainMaxCoordinate:(CGFloat)proposedMaximumPosition ofSubviewAt:(NSInteger)dividerIndex {
    return (splitView.vertical ? splitView.bounds.size.width : splitView.bounds.size.height) - 500;
}

- (void)splitView:(NSSplitView*)splitView resizeSubviewsWithOldSize:(NSSize)oldSize {
    [splitView adjustSubviews]; // Use default resizing behavior from NSSplitView
    NSView* view = splitView.subviews.firstObject;
    [splitView setPosition:(splitView.vertical ? view.frame.size.width : view.frame.size.height) ofDividerAtIndex:0]; // Force-apply constraints afterwards
}
This appears to work fine on OS X 10.8, 10.9 and 10.10, and is much cleaner than the other approaches as it's minimal code and the constraints are not duplicated.
An alternative way to solve this is using splitView:shouldAdjustSizeOfSubview:
I've found this much simpler for my purposes.
For example, if you want to prevent the sidebarTableView from ever being smaller than your 175-point minimum width, you can do something like this (assuming you made sidebarTableView an outlet on your view controller/delegate):
- (BOOL)splitView:(NSSplitView *)splitView shouldAdjustSizeOfSubview:(NSView *)subview
{
    if ((subview == self.sidebarTableView) && subview.bounds.size.width <= 175) {
        return NO;
    }
    return YES;
}
Here's my implementation of -splitView:resizeSubviewsWithOldSize::
-(void)splitView:(NSSplitView *)splitView resizeSubviewsWithOldSize:(NSSize)oldSize {
    if (![splitView isSubviewCollapsed:self.rightView] &&
        self.rightView.frame.size.width < 275.0f + DBL_EPSILON) {
        NSSize splitViewFrameSize = splitView.frame.size;
        CGFloat leftViewWidth = splitViewFrameSize.width - 275.0f - splitView.dividerThickness;
        self.leftView.frameSize = NSMakeSize(leftViewWidth, splitViewFrameSize.height);
        self.rightView.frame = NSMakeRect(leftViewWidth + splitView.dividerThickness,
                                          0.0f,
                                          275.0f,
                                          splitViewFrameSize.height);
    } else {
        [splitView adjustSubviews];
    }
}
In my case, rightView is the second of two subviews, which is collapsible with a minimum width of 275.0. leftView has no minimum or maximum and is not collapsible.
I used
- (void)splitView:(NSSplitView*)sender resizeSubviewsWithOldSize:(NSSize)oldSize
but instead of changing the subview frame, I used
[sender setPosition:360 ofDividerAtIndex:0]; //or whatever your index and position should be
Changing the frame didn't give me consistent results. Setting the position of the divider did.
Maybe I'm too late to the party, but here is my implementation of resizeSubviewsWithOldSize:. In my project I need a vertically resizable NSSplitView with the leftView width between 100.0 and 300.0; collapsing is not taken into account. You should take care of all possible dimensions for the subviews.
-(void)splitView:(NSSplitView *)splitView resizeSubviewsWithOldSize:(NSSize)oldSize {
    if (self.leftView.frame.size.width >= kMaxLeftWidth) {
        NSSize splitViewFrameSize = splitView.frame.size;
        CGFloat leftViewWidth = kMaxLeftWidth;
        CGFloat rightViewWidth = splitViewFrameSize.width - leftViewWidth - splitView.dividerThickness;
        self.leftView.frameSize = NSMakeSize(leftViewWidth, splitViewFrameSize.height);
        self.rightView.frame = NSMakeRect(leftViewWidth + splitView.dividerThickness,
                                          0.0f,
                                          rightViewWidth,
                                          splitViewFrameSize.height);
    } else if (self.leftView.frame.size.width <= kMinLeftWidth) {
        NSSize splitViewFrameSize = splitView.frame.size;
        CGFloat leftViewWidth = kMinLeftWidth;
        CGFloat rightViewWidth = splitViewFrameSize.width - leftViewWidth - splitView.dividerThickness;
        self.leftView.frameSize = NSMakeSize(leftViewWidth, splitViewFrameSize.height);
        self.rightView.frame = NSMakeRect(leftViewWidth + splitView.dividerThickness,
                                          0.0f,
                                          rightViewWidth,
                                          splitViewFrameSize.height);
    } else {
        NSSize splitViewFrameSize = splitView.frame.size;
        CGFloat leftViewWidth = self.leftView.frame.size.width;
        CGFloat rightViewWidth = splitViewFrameSize.width - leftViewWidth - splitView.dividerThickness;
        self.leftView.frameSize = NSMakeSize(leftViewWidth, splitViewFrameSize.height);
        self.rightView.frame = NSMakeRect(leftViewWidth + splitView.dividerThickness,
                                          0.0f,
                                          rightViewWidth,
                                          splitViewFrameSize.height);
    }
}
I've achieved this behavior by setting the holdingPriority on the NSSplitViewItem to Required (1000) for the fixed side in Interface Builder. You can then control the width of the fixed side by setting a constraint on the underlying NSView.
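The same setup can be done in code, roughly like this (assumes an NSSplitViewController; splitViewController and sidebarController are illustrative names):
// Sketch: hold the sidebar's size at required priority so window resizes
// go entirely to the other pane.
NSSplitViewItem *sidebarItem =
    [NSSplitViewItem splitViewItemWithViewController:sidebarController];
sidebarItem.holdingPriority = NSLayoutPriorityRequired; // 1000
[splitViewController addSplitViewItem:sidebarItem];

// Fix the actual width with a constraint on the underlying view.
[sidebarController.view addConstraint:
    [NSLayoutConstraint constraintWithItem:sidebarController.view
                                 attribute:NSLayoutAttributeWidth
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:nil
                                 attribute:NSLayoutAttributeNotAnAttribute
                                multiplier:1.0
                                  constant:250.0]];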
I just needed to do this, and came up with this method, which is a bit simpler than the previous examples. This code assumes there are IBOutlets for the left and right NSScrollViews of the NSSplitView container, as well as CGFloat constants for the minimum sizes of the left and right views.
#pragma mark - NSSplitView sizing override

//------------------------------------------------------------------------------
// This is implemented to ensure that when the window is resized our main table
// remains at its smallest size or larger (the default proportional sizing done by
// -adjustSubviews would size it smaller w/o -constrainMinCoordinate being called).
- (void) splitView: (NSSplitView *)inSplitView resizeSubviewsWithOldSize: (NSSize)oldSize
{
    // First, let the default proportional adjustments take place
    [inSplitView adjustSubviews];

    // Then ensure that our views are at least their min size
    // *** WARNING: this does not handle allowing the window to be made smaller than the total of the two views!

    // Gather current sizes
    NSSize leftViewSize = self.leftSideView.frame.size;
    NSSize rightViewSize = self.rightSideView.frame.size;
    NSSize splitViewSize = inSplitView.frame.size;
    CGFloat dividerWidth = inSplitView.dividerThickness;

    // Assume we don't have to resize anything
    CGFloat newLeftWidth = 0.0f;

    // Always adjust the left view first if we need to change either view's size
    if( leftViewSize.width < kLeftSplitViewMinSize )
    {
        newLeftWidth = kLeftSplitViewMinSize;
    }
    else if( rightViewSize.width < kRightSplitViewMinSize )
    {
        newLeftWidth = splitViewSize.width - (kRightSplitViewMinSize + dividerWidth);
    }

    // Do we need to adjust the size?
    if( newLeftWidth > 0.0f )
    {
        // Yes, do so by setting the left view and setting the right view to the space left over
        leftViewSize.width = newLeftWidth;
        rightViewSize.width = splitViewSize.width - (newLeftWidth + dividerWidth);

        // We also need to set the origin of the right view correctly
        NSPoint origin = self.rightSideView.frame.origin;
        origin.x = splitViewSize.width - rightViewSize.width;
        [self.rightSideView setFrameOrigin: origin];

        // Set the adjusted view sizes
        leftViewSize.height = rightViewSize.height = splitViewSize.height;
        [self.leftSideView setFrameSize: leftViewSize];
        [self.rightSideView setFrameSize: rightViewSize];
    }
}

Why is my QTKit based image encoding application so slow?

In a Cocoa application I'm currently coding, I'm getting snapshot images from a Quartz Composer renderer (NSImage objects) and I would like to encode them into a QTMovie at 720x480, 25 fps, with the H.264 codec, using the addImage: method. Here is the corresponding piece of code:
qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720, 480)
                                           colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
                                          composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size
imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec
              QTAddImageCodecType, nil];
qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object
long fps = 25;
frameNum = 0;
NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage * myImage;
while (renderingTime <= myMovieDuration) {
    if (![qRenderer renderAtTime: renderingTime arguments:NULL])
        NSLog(@"Rendering failed at time %.3fs", renderingTime);
    myImage = [qRenderer snapshotImage];
    [qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
    [myImage release];
    frameNum++;
    renderingTime = frameNum * frameInc;
}
[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release];
It works, but my application is not able to do this in real time on my new MacBook Pro, even though I know that QuickTime Broadcaster can encode images in real time in H.264, at an even higher quality than the one I use, on the same computer.
So why? What's the issue here? Is this a hardware management issue (multi-core threading, GPU, ...), or am I missing something? Let me preface this by saying that I'm new (2 weeks of practice) to the Apple development world, including Objective-C, Cocoa, Xcode, and the QuickTime and Quartz Composer libraries.
Thanks for any help
AVFoundation is a more efficient way to render a QuartzComposer animation to an H.264 video stream.
size_t width = 640;
size_t height = 480;
const char *outputFile = "/tmp/Arabesque.mp4";

QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
                                                      colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
                                                     composition:composition];

unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL];

NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) };
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

[videoWriter addInput:writerInput];
[writerInput release];

AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL];

int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

__block long frameNumber = 0;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);
NSLog(@"Starting.");
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
        if (![renderer renderAtTime:frameTime arguments:NULL]) {
            NSLog(@"Rendering failed at time %.3fs", frameTime);
            break;
        }

        CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CFRelease(frame);

        frameNumber++;
        if (frameNumber >= totalFrameCount) {
            [writerInput markAsFinished];
            [videoWriter finishWriting];
            [videoWriter release];
            [renderer release];
            NSLog(@"Rendered %ld frames.", frameNumber);
            break;
        }
    }
}];
In my testing this is around twice as fast as your posted QTKit code. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, the remaining bottlenecks appear to be the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
It may be possible to optimize this further by using QCRenderer's ability to provide snapshots as CVOpenGLBufferRefs, which may keep the frame's data on the GPU rather than reading it back to hand off to the encoder. I didn't look too far into that, though.
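For what it's worth, requesting that snapshot type would look roughly like this (an untested sketch; getting the GL-backed buffer into AVAssetWriter without a readback would need extra plumbing):
// Ask QCRenderer for an OpenGL-backed buffer so the frame can stay on the
// GPU. Feeding this to AVAssetWriterInputPixelBufferAdaptor, which expects
// CVPixelBufferRefs, is the part left unexplored here.
CVOpenGLBufferRef glFrame =
    (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"];
if (glFrame) {
    // ... hand off / convert to a CVPixelBuffer here ...
    CVOpenGLBufferRelease(glFrame);
}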

CALayer scroll view slowdown with many items

Hey, I'm having a performance problem with CALayers in a layer-backed NSView inside an NSScrollView. I've got a scroll view that I populate with a bunch of CALayer instances, stacked one on top of the next. Right now all I'm doing is putting a border around them so I can see them. There is nothing else in the layers.
Performance seems fine until I have around 1500 or so in the scroll view. When I put in 1500, performance is great as I scroll down the list until I get to around item 1000. Then, very suddenly, the app starts to hang. It's not a gradual slowdown, which is what I would expect if it were just reaching its capacity; it's like the app hits a brick wall.
When I call CGContextFillRect in the layers' draw method, the slowdown happens around item 300. I'm assuming this has something to do with the video card's memory filling up or something? Do I need to do something to free the CALayers' resources when they are offscreen in my scroll view?
I've noticed that if I don't call setNeedsDisplay on my layers, I can get to the end of 1500 items without slowdowns. This is not a solution, however, as I have custom drawing that I must perform in the layer. I'm not sure if that solves the problem or just makes it show up at a greater number of items. Ideally I would like this to scale to thousands of items in the scroll view (within reason, of course). Realistically, how many of these empty items should I expect to be able to display this way?
#import "ShelfView.h"
#import <Quartz/Quartz.h>
#implementation ShelfView
- (void) awakeFromNib
{
CALayer *rootLayer = [CALayer layer];
rootLayer.layoutManager = self;
rootLayer.geometryFlipped = YES;
[self setLayer:rootLayer];
[self setWantsLayer:YES];
int numItemsOnShelf = 1500;
for(NSUInteger i = 0; i < numItemsOnShelf; i++) {
CALayer* shelfItem = [CALayer layer];
[shelfItem setBorderColor:CGColorCreateGenericRGB(1.0, 0.0, 0.0, 1.0)];
[shelfItem setBorderWidth:1];
[shelfItem setNeedsDisplay];
[rootLayer addSublayer:shelfItem];
}
[rootLayer setNeedsLayout];
}
- (void)layoutSublayersOfLayer:(CALayer *)layer
{
float y = 10;
int totalItems = (int)[[layer sublayers] count];
for(int i = 0; i < totalItems; i++)
{
CALayer* item = [[layer sublayers] objectAtIndex:i];
CGRect frame = [item frame];
frame.origin.x = self.frame.size.width / 2 - 200;
frame.origin.y = y;
frame.size.width = 400;
frame.size.height = 400;
[CATransaction begin];
[CATransaction setAnimationDuration:0.0];
[item setFrame:CGRectIntegral(frame)];
[CATransaction commit];
y += 410;
}
NSRect thisFrame = [self frame];
thisFrame.size.height = y;
if(thisFrame.size.height < self.superview.frame.size.height)
thisFrame.size.height = self.superview.frame.size.height;
[self setFrame:thisFrame];
}
- (BOOL) isFlipped
{
return YES;
}
#end
I found out it was because I was filling each layer with custom drawing: the layers were all cached as separate images, even though they shared a lot of common data. So I switched to just creating a dozen CALayers, filling their contents property, and adding them as sublayers to a main layer. This made things MUCH zippier.
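A rough sketch of that approach (newShelfItemImage is a hypothetical method that performs the custom drawing once into a CGImage):
// Draw the shared artwork once, then reuse the same CGImage as the contents
// of every layer instead of giving each layer its own backing store.
CGImageRef sharedArt = [self newShelfItemImage]; // hypothetical one-time render
for (int i = 0; i < numItemsOnShelf; i++) {
    CALayer *shelfItem = [CALayer layer];
    shelfItem.contents = (id)sharedArt; // no per-layer -drawInContext: needed
    [rootLayer addSublayer:shelfItem];
}
CGImageRelease(sharedArt);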

How do I apply multiple color masks using CGImageCreateWithMaskingColors?

I'm a bit new to Objective-C and even newer to programming with Quartz 2D, so apologies in advance! I've got a method where I would like to remove a handful of specific colors (not just one) from a UIImage.
When I run my project with just one color mask being applied, it works beautifully. Once I try stacking them, the whiteRef comes out NULL. I've even tried modifying my method to take a single color mask and then simply running it twice - feeding in the different color masks - but still no go.
Any help with this is greatly appreciated!
- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
    const float brownsMask[6] = {124, 255, 68, 222, 0, 165};
    const float whiteMask[6] = {255, 255, 255, 255, 255, 255};

    UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
    CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
    CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), whiteRef);
    CGImageRelease(brownRef);
    CGImageRelease(whiteRef);

    UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [imageView release];
    return doctoredImage;
}
Well, I found a good workaround! Basically, I was ultimately using the image data over an MKMapView, so all I needed to do was break the image down into its pixels, and from there I could mess around as I please. It may not be the best option in terms of speed, but it does the trick. Here is a sample of the code I'm now using.
//Split the image into pixels to mess with the pixel colors individually
size_t bufferLength = gridWidth * gridHeight * 4;
unsigned char *rawData = nil;
rawData = (unsigned char *)[self convertUIImageToBitmapRGBA8:myUIImage];
grid = malloc(sizeof(float) * (bufferLength / 4));

NSString *hexColor = nil;
float value = 0; // declared here so the snippet stands alone
for (int i = 0; i < (bufferLength); i = i + 4)
{
    hexColor = [NSString stringWithFormat: @"%02x%02x%02x", (int)(rawData[i + 0]), (int)(rawData[i + 1]), (int)(rawData[i + 2])];

    //mess with colors how you see fit - I just detected certain colors and slapped
    //that into an array of floats which I later put over my mapview much like the
    //hazardmap example from apple.
    if ([hexColor isEqualToString:@"ff0299"]) //pink
        value = (float)11;
    if ([hexColor isEqualToString:@"9933cc"]) //purple
        value = (float)12;
    //etc...
    grid[i/4] = value;
}
I also borrowed some methods (ie: convertUIImageToBitmapRGBA8) from here: https://gist.github.com/739132
Hope this helps someone out!