Xcode UI on background thread to render image

I'm rendering an image with text for one of my apps, and doing so has a noticeable impact on UI performance (it can cause as much as a ~1 second freeze), so I'm doing it on a background thread. Since the image contains text, using UILabels and other UIViews makes it easy to lay everything out, and I render the view containing everything into an image.
However, I get a warning from Xcode saying that this isn't allowed on a background thread because it uses UIKit. Why am I not allowed to use UIKit on a background thread even though my use case is completely self-contained and isolated from any onscreen rendering?
To help the code below make more sense: it draws an image that is a listing of several items, each of which consists of two small square images and the item's name, all in a row. The list can have several columns. The code has been tweaked slightly (mostly variable names) to avoid showing proprietary code, but it does the same job.
My code:
NSArray<MyItem*>* items; // These are the items that I'm drawing. They
// get set before the following code is called.
// Processing code:
const CGFloat TITLE_FONT_SIZE = 50; // font size of the title
const CGFloat ITEM_FONT_SIZE = 25; // font size of the item names
const int OUTER_PADDING = 60; // padding from the edge of the image to the main content
const int ROW_PADDING = 13; // padding between rows
const int COL_PADDING = 100; // padding between columns
const int PADDING = 20; // padding between content items in a row
const int BOX_SIZE = 25; // how high/wide each image is
const int ROW_HEIGHT = BOX_SIZE; // pixel height of a line
const int COL_WIDTH = 500; // pixel width of a column (image1, image2, and name)
// compute the dimensions of the image
UILabel* titleLabel = [[UILabel alloc] init];
titleLabel.font = [UIFont systemFontOfSize:TITLE_FONT_SIZE];
titleLabel.text = @"My image";
[titleLabel sizeToFit];
titleLabel.frame = CGRectMake(OUTER_PADDING, OUTER_PADDING / 2, titleLabel.frame.size.width, titleLabel.frame.size.height);
const int MIN_NUM_COLS = 1 + ((titleLabel.frame.size.width - COL_WIDTH) / (COL_WIDTH + COL_PADDING));
const int NORMAL_NUM_COLS = (int)ceil(sqrt([items count] / (COL_WIDTH / (ROW_HEIGHT))));
const int NUM_COLS = (MIN_NUM_COLS > NORMAL_NUM_COLS ? MIN_NUM_COLS : NORMAL_NUM_COLS);
const int NUM_ROWS = (int)ceil([items count] / (float)NUM_COLS);
const int NUM_OVERFLOW_ROWS = [items count] % NUM_ROWS;
const int titleWidth = titleLabel.frame.size.width;
const int defaultWidth = (NUM_COLS * (COL_WIDTH + COL_PADDING)) - COL_PADDING;
const int pixelWidth = (2 * OUTER_PADDING) + (titleWidth > defaultWidth ? titleWidth : defaultWidth);
const int pixelHeight = (2 * OUTER_PADDING) + (TITLE_FONT_SIZE + PADDING) + (NUM_ROWS * (ROW_HEIGHT + ROW_PADDING)) - ROW_PADDING;
const int nbytes = 4 * pixelHeight * pixelWidth;
uint8_t* data = (uint8_t*)malloc(nbytes); // uint8_t: "byte" is not a standard C/Objective-C type
memset(data, 255, nbytes);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, pixelWidth, pixelHeight, 8, 4 * pixelWidth, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace); // don't leak the color space
// --------------------------------------------------
// create a view hierarchy and then draw to our context
UIView* mainView = [[UIView alloc] init];
[mainView addSubview:titleLabel];
// setup all the views
int keyIndex = 0;
CGFloat x = OUTER_PADDING;
CGFloat starty = titleLabel.frame.origin.y + titleLabel.frame.size.height + PADDING;
for (int col = 0; col < NUM_COLS; col++)
{
int nrows = (col == NUM_COLS - 1 && NUM_OVERFLOW_ROWS > 0 ? NUM_OVERFLOW_ROWS : NUM_ROWS); // only the last column can be short (the original compared against NUM_COLS + 1, which can never match)
CGFloat y = starty;
for (int row = 0; (row < nrows) && (keyIndex < [items count]); row++)
{
CGFloat tempx = x;
MyItem* item = [items objectAtIndex:keyIndex];
UIImageView* imageview1 = [[UIImageView alloc] initWithImage:item.image1];
imageview1.frame = CGRectMake(tempx, y, BOX_SIZE, BOX_SIZE);
[mainView addSubview:imageview1];
tempx += BOX_SIZE + PADDING;
UIImageView* imageview2 = [[UIImageView alloc] initWithImage:item.image2];
imageview2.frame = CGRectMake(tempx, y, BOX_SIZE, BOX_SIZE);
[mainView addSubview:imageview2];
tempx += BOX_SIZE + PADDING;
UILabel* label = [[UILabel alloc] init];
label.font = [UIFont systemFontOfSize:ITEM_FONT_SIZE];
label.text = item.name;
[label sizeToFit];
label.center = CGPointMake(tempx + (label.frame.size.width / 2), imageview2.center.y);
[mainView addSubview:label];
y += ROW_HEIGHT + ROW_PADDING;
keyIndex++;
}
x += COL_WIDTH + COL_PADDING;
}
// --------------------------------------------------
// draw everything to actually generate the image
CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, pixelHeight));
[mainView.layer renderInContext:context];
CGImageRef cgimage = CGBitmapContextCreateImage(context);
myCoolImage = [UIImage imageWithCGImage:cgimage];
CGImageRelease(cgimage);
CGContextRelease(context);
free(data);

As we've established in the comments, what you're doing is both illegitimate and slow.
Arranging and sizing UILabel and UIImageView objects is slow, and calling CALayer's renderInContext: is really slow.
And it isn't how you draw.
Everything you're doing has its analogue in the actual drawing world (Quartz 2D), and if you did it that way, not only would it be legal in the background, it probably wouldn't even need to be in the background because it would be so much faster. So:
Every place you use a UILabel, you can achieve exactly the same effect by using NSAttributedString draw... commands.
Every place you use a UIImageView, you can achieve exactly the same effect by using UIImage draw... commands.
Any of us who does any extensive drawing has learned to create structured layouts of the type you're making by using actual drawing code, and now is your chance to learn to do it too.
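To make that concrete, here is a minimal sketch (mine, not the asker's code) of the same layout done purely with string and image drawing. It assumes the constants and the items array from the question are in scope, and collapses the column-wrapping logic to keep the example short; every call in it is safe off the main thread:
// Minimal sketch, assuming the question's constants and `items` are in scope.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(pixelWidth, pixelHeight), YES, 1);
[[UIColor whiteColor] setFill];
UIRectFill(CGRectMake(0, 0, pixelWidth, pixelHeight));
NSDictionary* titleAttrs = @{ NSFontAttributeName : [UIFont systemFontOfSize:TITLE_FONT_SIZE] };
[@"My image" drawAtPoint:CGPointMake(OUTER_PADDING, OUTER_PADDING / 2) withAttributes:titleAttrs];
NSDictionary* itemAttrs = @{ NSFontAttributeName : [UIFont systemFontOfSize:ITEM_FONT_SIZE] };
CGFloat x = OUTER_PADDING;
CGFloat y = (OUTER_PADDING / 2) + TITLE_FONT_SIZE + PADDING;
for (MyItem* item in items)
{
    // Two box images and the name, laid out just as the view version was.
    [item.image1 drawInRect:CGRectMake(x, y, BOX_SIZE, BOX_SIZE)];
    [item.image2 drawInRect:CGRectMake(x + BOX_SIZE + PADDING, y, BOX_SIZE, BOX_SIZE)];
    [item.name drawAtPoint:CGPointMake(x + 2 * (BOX_SIZE + PADDING), y) withAttributes:itemAttrs];
    y += ROW_HEIGHT + ROW_PADDING; // advance x by COL_WIDTH + COL_PADDING here to wrap columns
}
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Because nothing here touches a view or a layer, there is no main-thread requirement and no renderInContext: cost.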

Related

How is transparency achieved in Cocoa applications?

I am trying to understand how transparency is actually implemented in Cocoa applications. I was expecting the standard blending equation to be used, i.e.
BlendedColour = alpha * layerColour + (1 - alpha) * backgroundColour
However, I noticed a slight difference between the blended colour I see and the one the above equation predicts. To verify it, I did a small experiment as follows:
1.) Created a window, set its transparency to 0.8, and grabbed a screenshot.
2.) Took a screenshot of the part of the screen where I overlay the window in step 1, without the window, and overlaid the same image as in step 1 using the equation mentioned above (I used OpenCV for that).
There is a slight difference in the colours of the two images, if you look closely. I want to understand what is causing the difference.
Resources:
1.) Images from step 1 and step 2, respectively
2.) Code used in step 1
NSRect windowRect = {0,0,200,200};
m_NSWindow = [[NSWindow alloc] initWithContentRect:windowRect styleMask:NSBorderlessWindowMask backing:NSBackingStoreBuffered defer:NO];
[m_NSWindow setTitle:@"overlayWindow"];
[m_NSWindow makeKeyAndOrderFront:nil];
g_imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0,0,200,200)];
[m_NSWindow.contentView addSubview:g_imageView];
[m_NSWindow setOpaque:NO];
[m_NSWindow setAlphaValue:0.8];
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:200
pixelsHigh:200
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSDeviceRGBColorSpace
bitmapFormat:NSAlphaNonpremultipliedBitmapFormat
bytesPerRow:(200*4)
bitsPerPixel:32];
memcpy(imageRep.bitmapData,m_paintBuffer.data,160000);
NSSize imageSize = NSMakeSize(200,200);
NSImage* myImage = [[NSImage alloc] initWithSize: imageSize];
[myImage addRepresentation:imageRep];
[g_imageView setImage:myImage];
3.) Code used in step 2
void overlayImage(const cv::Mat &background, const cv::Mat &foreground,
cv::Mat &output, cv::Point2i location)
{
background.copyTo(output);
// start at the row indicated by location, or at row 0 if location.y is negative.
for (int y = max(location.y, 0); y < background.rows; ++y)
{
int fY = y - location.y; // because of the translation
// we are done if we have processed all rows of the foreground image.
if (fY >= foreground.rows)
break;
// start at the column indicated by location,
// or at column 0 if location.x is negative.
for (int x = max(location.x, 0); x < background.cols; ++x)
{
int fX = x - location.x; // because of the translation.
// we are done with this row if the column is outside of the foreground image.
if (fX >= foreground.cols)
break;
// determine the opacity of the foreground pixel, using its fourth (alpha) channel.
double opacity =
((double)foreground.data[fY * foreground.step + fX * foreground.channels() + 3])
/ 255.;
// and now combine the background and foreground pixel, using the opacity,
// but only if opacity > 0.
for (int c = 0; opacity > 0 && c < output.channels(); ++c)
{
unsigned char foregroundPx =
foreground.data[fY * foreground.step + fX * foreground.channels() + c];
unsigned char backgroundPx =
background.data[y * background.step + x * background.channels() + c];
output.data[y*output.step + output.channels()*x + c] =
backgroundPx * (1. - opacity) + foregroundPx * opacity;
}
}
}
}

Weird screen resizing bug when calling shadowCastBitMask=

I'm making a ball/paddle "breakout" style game for iOS 8 where the blocks fall from the top of the screen.
I decided to try Apple's new SKLightNode in SpriteKit, and it worked wonderfully, casting a light from the top of the screen:
in levelScene.h:
#import <SpriteKit/SpriteKit.h>
in levelScene.m:
-(id)initWithSize:(CGSize)size {
...
SKLightNode* light = [[SKLightNode alloc] init];
// light.enabled = YES;
light.categoryBitMask = lightCategory;
light.falloff = 1;
light.ambientColor = [UIColor whiteColor];
light.lightColor = [[UIColor alloc] initWithRed:0.0 green:1.0 blue:0.5 alpha:0.5];
light.shadowColor = [[UIColor alloc] initWithRed:0.0 green:0.0 blue:0.0 alpha:0.2];
light.position = CGPointMake(CGRectGetMidX(self.frame), self.frame.size.height - 20);
[self addChild:light];
...
}
and casting a shadow from the paddle near the bottom of the screen:
-(id)initWithSize:(CGSize)size {
...
self.paddle.shadowCastBitMask = lightCategory;
...
}
However, when I try to make my falling blocks cast shadows by setting the shadowCastBitMask of my rectangle (block) sprite nodes, which, unlike the paddle, are added at various intervals throughout play, I see a bizarre kind of clipping where the entire screen and all of its contents resize to around 60%-80% of the original size, squashing slightly in the vertical direction. It lasts only the briefest of moments, so I cannot get a decent idea of what it is even doing to the image, let alone why. I have found nothing relating to this bug online.
I can say that it is reproduced every time a block enters from the top of the screen, even the first time, which suggests it has nothing to do with multiple instances being created simultaneously. Since the paddle (and the ball, when tested) casts a shadow without problems, I can only assume it is either because the call is made during gameplay rather than before it has started, as is the case with the paddle, or because there is something in my -addRectangle call that I'm missing.
So, here's -(void)addRectangle in its entirety, the shadowCastBitMask=... call is near the end:
- (void)addRectangle {
// Create sprite
self.rectangle = [SKSpriteNode spriteNodeWithImageNamed:@"Rectangle"];
// Determine where to spawn the rectangle along the X axis
int minX = CGRectGetMinX(self.frame) + (self.rectangle.size.width / 2);
int maxX = CGRectGetMaxX(self.frame) - self.rectangle.size.width;
int actualX = arc4random_uniform(maxX) + minX;
// Create the rectangle slightly off-screen
self.rectangle.position = CGPointMake(actualX, self.frame.size.height + self.rectangle.size.height/1);
self.rectangle.zPosition = 5;
// Determine speed of the rectangle
if(multiplier>=29 && multiplier<49){
int minDuration = 5.5; // NB: int truncates this to 5 (and 7.0 to 7); these durations should really be floats
int maxDuration = 7.0;
int rangeDuration = maxDuration - minDuration;
int actualDuration = (arc4random() % rangeDuration) + minDuration;
// Create the actions
SKAction * actionMove = [SKAction moveTo:CGPointMake(actualX, -self.rectangle.size.height/1) duration:actualDuration];
[self.rectangle runAction:actionMove];
}
if(multiplier>=49 && multiplier < 99){
int minDuration = 4.0;
int maxDuration = 5.0;
int rangeDuration = maxDuration - minDuration;
int actualDuration = (arc4random() % rangeDuration) + minDuration;
// Create the actions
SKAction * actionMove = [SKAction moveTo:CGPointMake(actualX, -self.rectangle.size.height/1) duration:actualDuration];
[self.rectangle runAction:actionMove];
}
if(multiplier>=99){
int minDuration = 3.0;
int maxDuration = 4.0;
int rangeDuration = maxDuration - minDuration;
int actualDuration = (arc4random() % rangeDuration) + minDuration;
// Create the actions
SKAction * actionMove = [SKAction moveTo:CGPointMake(actualX, -self.rectangle.size.height/1) duration:actualDuration];
[self.rectangle runAction:actionMove];
}
else if (multiplier<29){
int minDuration = 6.0;
int maxDuration = 10.0;
int rangeDuration = maxDuration - minDuration;
int actualDuration = (arc4random() % rangeDuration) + minDuration;
// Create the actions
SKAction * actionMove = [SKAction moveTo:CGPointMake(actualX, -self.rectangle.size.height/1) duration:actualDuration];
[self.rectangle runAction:actionMove];
}
self.rectangle.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:self.rectangle.size];
self.rectangle.physicsBody.dynamic = YES;
self.rectangle.physicsBody.restitution = 0.4f;
self.rectangle.physicsBody.density = 1000;
self.rectangle.physicsBody.categoryBitMask = blockCategory;
self.rectangle.physicsBody.contactTestBitMask = bottomCategory | paddleCategory | laserCategory;
self.rectangle.physicsBody.collisionBitMask = 0x0 ;
self.rectangle.physicsBody.affectedByGravity = NO;
//the offending line:
self.rectangle.shadowCastBitMask = lightCategory;
[self addChild:self.rectangle];
[_blocks addObject:self.rectangle];
_EyeLeft = [SKSpriteNode spriteNodeWithImageNamed:@"Eye"];
_EyeLeft.position = CGPointMake(_EyeLeft.parent.position.x - 10, _EyeLeft.parent.position.y);
// _EyeLeft.zPosition = 7;
_EyeLeft.physicsBody.allowsRotation = YES;
_EyeLeft.name = @"Eye";
[self.rectangle addChild:_EyeLeft];
[_Eyes addObject:_EyeLeft];
_EyeRight = [SKSpriteNode spriteNodeWithImageNamed:@"Eye"];
_EyeRight.position = CGPointMake(_EyeRight.parent.position.x + 10, _EyeRight.parent.position.y);
// _EyeRight.zPosition = 7;
_EyeRight.physicsBody.allowsRotation = YES;
_EyeRight.name = @"Eye";
// _EyeLeft.physicsBody.angularDamping = 0.2;
[self.rectangle addChild:_EyeRight];
[_Eyes addObject:_EyeRight];
}
The bug is not reproduced if I simply delete the shadowCastBitMask=... call, however then I get no shadows.
I also don't understand why the whole picture would resize, as I am not, as far as I'm aware, calling any commands related to the scale or scene at the time, certainly not triggered by such a call.
Any help at all would be greatly appreciated,
Thanks in advance for your time and any help offered.
I had this problem. I noticed it happened when using a tile map, so I just used one big static image for the background, and that seems to have solved the problem.
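For reference, a minimal sketch of that workaround, with a hypothetical "Background" texture name standing in for the tile map:
// One large static background sprite instead of a tile map.
SKSpriteNode* bg = [SKSpriteNode spriteNodeWithImageNamed:@"Background"]; // hypothetical texture name
bg.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
bg.zPosition = -1; // keep it behind the blocks and paddle
bg.lightingBitMask = lightCategory; // still lit by the SKLightNode
[self addChild:bg];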

How to iterate through all pixels of a UIImage?

Hey guys, I am currently trying to iterate through all the pixels of a UIImage, but the way I implemented it takes a very long time, so I think I went about it the wrong way.
This is my method for getting the RGBA values of a pixel:
+(NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
// Initializing the result array
NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage]; // get the CGImage that backs our UIImage
NSUInteger width = CGImageGetWidth(imageRef); // Get width of our Image
NSUInteger height = CGImageGetHeight(imageRef); // Get height of our Image
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // creating our colour Space
// Getting that raw Data out of an image
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4; // Bytes per pixel defined
NSUInteger bytesPerRow = bytesPerPixel * width; // Bytes per row
NSUInteger bitsPerComponent = 8; // Bits per component
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace); // releasing the color space
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
[result addObject:acolor];
}
free(rawData);
return result;
}
And this is how I iterate through all the pixels:
for (NSUInteger y = 0; y < self.originalPictureHeight; y++) {
for (NSUInteger x = 0; x < self.originalPictureWidth; x++) {
NSArray * originalRGBA = [ComputerVisionHelperClass getRGBAsFromImage:self.originalPicture atX:(int)x andY:(int)y count:1];
NSArray * referenceRGBA = [ComputerVisionHelperClass getRGBAsFromImage:self.referencePicture atX:(int)referenceIndexX andY:(int)referenceIndexY count:1];
// Do something else ....
}
}
Is there a faster way of getting all the RGBA values of a UIImage instance?
For every pixel, you're generating a new copy of the image and then throwing it away. Yes, it would be much faster to get the data once and then process that byte array.
But it depends heavily on what's in "Do something else." There are many Core Image and vImage functions that can do image processing very quickly, but you may need to approach the problem differently; it depends on what you're doing.
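As a minimal sketch of that suggestion, reusing the buffer setup from the question's own method: render the image into a byte buffer once, then walk the buffer directly.
// Sketch: one render, then direct iteration over the RGBA8888 buffer.
CGImageRef imageRef = [image CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bytesPerRow = width * 4;
unsigned char* rawData = (unsigned char*)calloc(height * bytesPerRow, sizeof(unsigned char));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        unsigned char* px = rawData + (y * bytesPerRow) + (x * 4);
        CGFloat red = px[0] / 255.0;
        CGFloat green = px[1] / 255.0;
        CGFloat blue = px[2] / 255.0;
        CGFloat alpha = px[3] / 255.0;
        // "Do something else" with the pixel here; avoid allocating a
        // UIColor per pixel unless you actually need one.
    }
}
free(rawData);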

Export OpenGL ES video

Xcode has the ability to capture OpenGL ES frames from the iPad, and that's great! I would like to extend this functionality and capture an entire OpenGL ES movie of my application. Is there a way to do that?
If it's not possible using Xcode, how can I do it without much effort and big changes to my code? Thank you very much!
I use a very simple technique, which requires just a few lines of code.
You can capture each GL frame into a UIImage using this code:
- (UIImage*)captureScreen {
NSInteger dataLength = framebufferWidth * framebufferHeight * 4;
// Allocate array.
GLuint *buffer = (GLuint *) malloc(dataLength);
GLuint *resultsBuffer = (GLuint *)malloc(dataLength);
// Read data
glReadPixels(0, 0, framebufferWidth, framebufferHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// Flip vertical
for(int y = 0; y < framebufferHeight; y++) {
for(int x = 0; x < framebufferWidth; x++) {
resultsBuffer[x + y * framebufferWidth] = buffer[x + (framebufferHeight - 1 - y) * framebufferWidth];
}
}
free(buffer);
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, resultsBuffer, dataLength, releaseScreenshotData);
// prep the ingredients
const int bitsPerComponent = 8;
const int bitsPerPixel = 4 * bitsPerComponent;
const int bytesPerRow = 4 * framebufferWidth;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(framebufferWidth, framebufferHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
// then make the UIImage from that
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return image;
}
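The answer calls releaseScreenshotData without showing it; a minimal implementation (my assumption, not part of the original answer) just frees the pixel buffer once Core Graphics is done with it:
// Assumed CGDataProviderReleaseDataCallback: frees resultsBuffer.
static void releaseScreenshotData(void* info, const void* data, size_t size)
{
    free((void*)data);
}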
Then you will capture each frame in your main loop:
- (void)onTimer {
// Compute and render new frame
[self update];
// Recording
if (recordingMode == RecordingModeMovie) {
recordingFrameNum++;
// Save frame
UIImage *image = [self captureScreen];
NSString *fileName = [NSString stringWithFormat:@"%d.jpg", (int)recordingFrameNum];
[UIImageJPEGRepresentation(image, 1.0) writeToFile:[basePath stringByAppendingPathComponent:fileName] atomically:NO];
}
}
At the end you will have tons of JPEG files, which can easily be converted into a movie by
Time Lapse Assembler
If you want a nice 30 FPS movie, hard-fix your simulation step to 1/30.0 sec per frame.
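A sketch of that fixed-timestep idea, where stepSimulation: and wallClockDelta are hypothetical hooks into your own loop:
// While recording, advance the simulation by exactly 1/30 s per captured
// frame instead of the measured wall-clock delta.
static const NSTimeInterval kFrameStep = 1.0 / 30.0;
- (void)update {
    NSTimeInterval dt = (recordingMode == RecordingModeMovie)
        ? kFrameStep // one movie frame per loop pass
        : [self wallClockDelta]; // hypothetical: normal interactive timing
    [self stepSimulation:dt]; // hypothetical: advance game state by dt
    [self render]; // hypothetical: draw the GL frame
}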

How to get frame for NSStatusItem

Is it possible to get the frame of an NSStatusItem after I've added it to the status bar in Cocoa? When my app is launched, I add an item to the system status bar and would like to know where it was positioned, if possible.
The following seems to work. I have seen similar solutions for iOS applications, and supposedly they permit submission to the App Store because you are still using standard SDK methods.
NSRect frame = [[statusBarItem valueForKey:@"window"] frame];
With 10.10, NSStatusItem has a button property that can be used to get the status item's position without setting a custom view.
NSStatusBarButton *statusBarButton = [myStatusItem button];
NSRect rectInWindow = [statusBarButton convertRect:[statusBarButton bounds] toView:nil];
NSRect screenRect = [[statusBarButton window] convertRectToScreen:rectInWindow];
NSLog(@"%@", NSStringFromRect(screenRect));
You can use statusItem.button.superview?.window?.frame in Swift.
If you have set a custom view on the status item:
NSRect statusRect = [[statusItem view] frame];
NSLog(@"%@", [NSString stringWithFormat:@"%.1fx%.1f", statusRect.size.width, statusRect.size.height]);
Otherwise I don't think it's possible using the available and documented APIs.
Edit: Incorporated comments.
It's possible to do this without any private API. Here's a category on NSScreen. It uses image analysis to locate the status item's image on the menu bar. Fortunately, computers are really fast. :)
As long as you know what the status item's image looks like and can pass it in as an NSImage, this method should find it.
It works in dark mode as well as regular mode. Note that the image you pass in must be black; colored images will probably not work so well.
@implementation NSScreen (LTStatusItemLocator)
// Find the location of IMG on the screen's status bar.
// If the image is not found, returns NSZeroPoint
- (NSPoint)originOfStatusItemWithImage:(NSImage *)IMG
{
CGColorSpaceRef csK = CGColorSpaceCreateDeviceGray();
NSPoint ret = NSZeroPoint;
CGDirectDisplayID screenID = 0;
CGImageRef displayImg = NULL;
CGImageRef compareImg = NULL;
CGRect screenRect = CGRectZero;
CGRect barRect = CGRectZero;
uint8_t *bm_bar = NULL;
uint8_t *bm_bar_ptr;
uint8_t *bm_compare = NULL;
uint8_t *bm_compare_ptr;
size_t bm_compare_w, bm_compare_h;
BOOL inverted = NO;
int numberOfScanLines = 0;
CGFloat *meanValues = NULL;
int presumptiveMatchIdx = -1;
CGFloat presumptiveMatchMeanVal = 999;
// If the computer is set to Dark Mode, set the "inverted" flag
NSDictionary *globalPrefs = [[NSUserDefaults standardUserDefaults] persistentDomainForName:NSGlobalDomain];
id style = globalPrefs[@"AppleInterfaceStyle"];
if ([style isKindOfClass:[NSString class]]) {
inverted = (NSOrderedSame == [style caseInsensitiveCompare:@"dark"]);
}
screenID = (CGDirectDisplayID)[self.deviceDescription[@"NSScreenNumber"] integerValue];
screenRect = CGDisplayBounds(screenID);
// Get the menubar rect
barRect = CGRectMake(0, 0, screenRect.size.width, 22);
displayImg = CGDisplayCreateImageForRect(screenID, barRect);
if (!displayImg) {
NSLog(@"Unable to create image from display");
CGColorSpaceRelease(csK);
return ret; // I would normally use goto(bail) here, but this is public code so let's not ruffle any feathers
}
size_t bar_w = CGImageGetWidth(displayImg);
size_t bar_h = CGImageGetHeight(displayImg);
// Determine scale factor based on the CGImageRef we got back from the display
CGFloat scaleFactor = (CGFloat)bar_h / (CGFloat)22;
// Greyscale bitmap for menu bar
bm_bar = malloc(1 * bar_w * bar_h);
{
CGContextRef bmCxt = NULL;
bmCxt = CGBitmapContextCreate(bm_bar, bar_w, bar_h, 8, 1 * bar_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
// Draw the menu bar in grey
CGContextDrawImage(bmCxt, CGRectMake(0, 0, bar_w, bar_h), displayImg);
uint8_t minVal = 0xff;
uint8_t maxVal = 0x00;
// Walk a single scan line across the middle of the bar to find min and max pixel values
uint64_t running = 0;
for (int yi = bar_h / 2; yi == bar_h / 2; yi++)
{
bm_bar_ptr = bm_bar + (bar_w * yi);
for (int xi = 0; xi < bar_w; xi++)
{
uint8_t v = *bm_bar_ptr++;
if (v < minVal) minVal = v;
if (v > maxVal) maxVal = v;
running += v;
}
}
running /= bar_w;
uint8_t threshold = minVal + ((maxVal - minVal) / 2);
//threshold = running;
// Walk the bitmap
bm_bar_ptr = bm_bar;
for (int yi = 0; yi < bar_h; yi++)
{
for (int xi = 0; xi < bar_w; xi++)
{
// Threshold all the pixels. Values > 50% go white, values <= 50% go black
// (opposite if Dark Mode)
// Could unroll this loop as an optimization, but probably not worthwhile
*bm_bar_ptr = (*bm_bar_ptr > threshold) ? (inverted?0x00:0xff) : (inverted?0xff:0x00);
bm_bar_ptr++;
}
}
CGImageRelease(displayImg);
displayImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
CGContextRef bmCxt = NULL;
CGImageRef img_cg = NULL;
bm_compare_w = scaleFactor * IMG.size.width;
bm_compare_h = scaleFactor * 22;
// Create our comparison bitmap from the image that was passed in
bmCxt = CGBitmapContextCreate(NULL, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(bmCxt, kCGBlendModeNormal);
NSRect imgRect_og = NSMakeRect(0,0,IMG.size.width,IMG.size.height);
NSRect imgRect = imgRect_og;
img_cg = [IMG CGImageForProposedRect:&imgRect context:nil hints:nil];
CGContextClearRect(bmCxt, imgRect);
CGContextSetFillColorWithColor(bmCxt, [NSColor whiteColor].CGColor);
CGContextFillRect(bmCxt, CGRectMake(0,0,9999,9999));
CGContextScaleCTM(bmCxt, scaleFactor, scaleFactor);
CGContextTranslateCTM(bmCxt, 0, (22. - IMG.size.height) / 2.);
// Draw the image in grey
CGContextSetFillColorWithColor(bmCxt, [NSColor blackColor].CGColor);
CGContextDrawImage(bmCxt, imgRect, img_cg);
compareImg = CGBitmapContextCreateImage(bmCxt);
CGContextRelease(bmCxt);
}
{
// We start at the right of the menu bar, and scan left until we find a good match
int numberOfScanLines = barRect.size.width - IMG.size.width;
bm_compare = malloc(1 * bm_compare_w * bm_compare_h);
// We use the meanValues buffer to keep track of how well the image matched for each point in the scan
meanValues = calloc(sizeof(CGFloat), numberOfScanLines);
// Walk the menubar image from right to left, pixel by pixel
for (int scanx = 0; scanx < numberOfScanLines; scanx++)
{
// Optimization, if we recently found a really good match, bail on the loop and return it
if ((presumptiveMatchIdx >= 0) && (scanx > (presumptiveMatchIdx + 5))) {
break;
}
CGFloat xOffset = numberOfScanLines - scanx;
CGRect displayRect = CGRectMake(xOffset * scaleFactor, 0, IMG.size.width * scaleFactor, 22. * scaleFactor);
CGImageRef displayCrop = CGImageCreateWithImageInRect(displayImg, displayRect);
CGContextRef compareCxt = CGBitmapContextCreate(bm_compare, bm_compare_w, bm_compare_h, 8, 1 * bm_compare_w, csK, kCGBitmapAlphaInfoMask&kCGImageAlphaNone);
CGContextSetBlendMode(compareCxt, kCGBlendModeCopy);
// Draw the image from our menubar
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), displayCrop);
// Blend mode difference is like an XOR
CGContextSetBlendMode(compareCxt, kCGBlendModeDifference);
// Draw the test image. Because of blend mode, if we end up with a black image we matched perfectly
CGContextDrawImage(compareCxt, CGRectMake(0,0,IMG.size.width * scaleFactor, 22. * scaleFactor), compareImg);
CGContextFlush(compareCxt);
// Walk through the result image, to determine overall blackness
bm_compare_ptr = bm_compare;
for (int i = 0; i < bm_compare_w * bm_compare_h; i++)
{
meanValues[scanx] += (CGFloat)(*bm_compare_ptr);
bm_compare_ptr++;
}
meanValues[scanx] /= (255. * (CGFloat)(bm_compare_w * bm_compare_h));
// If the image is very dark, it matched well. If the average pixel value is < 0.07, we consider this
// a presumptive match. Mark it as such, but continue looking to see if there's an even better match.
if (meanValues[scanx] < 0.07) {
if (meanValues[scanx] < presumptiveMatchMeanVal) {
presumptiveMatchMeanVal = meanValues[scanx];
presumptiveMatchIdx = scanx;
}
}
CGImageRelease(displayCrop);
CGContextRelease(compareCxt);
}
}
// After we're done scanning the whole menubar (or we bailed because we found a good match),
// return the origin point.
// If we didn't match well enough, return NSZeroPoint
if (presumptiveMatchIdx >= 0) {
ret = CGPointMake(CGRectGetMaxX(self.frame), CGRectGetMaxY(self.frame));
ret.x -= (IMG.size.width + presumptiveMatchIdx);
ret.y -= 22;
}
CGImageRelease(displayImg);
CGImageRelease(compareImg);
CGColorSpaceRelease(csK);
if (bm_bar) free(bm_bar);
if (bm_compare) free(bm_compare);
if (meanValues) free(meanValues);
return ret;
}
@end
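A hypothetical call site for the category above (the asset name is an assumption; remember the icon must be black):
// Locate our status item by its icon on the screen that carries the menu bar.
NSImage* icon = [NSImage imageNamed:@"StatusIcon"]; // hypothetical asset name
NSScreen* menuBarScreen = [NSScreen screens].firstObject;
NSPoint origin = [menuBarScreen originOfStatusItemWithImage:icon];
if (!NSEqualPoints(origin, NSZeroPoint)) {
    NSLog(@"Status item is at %@", NSStringFromPoint(origin));
}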
You can hack the window ivar like this:
#import <objc/runtime.h> // for class_getInstanceSize
@interface NSStatusItem (Hack)
- (NSRect)hackFrame;
@end
@implementation NSStatusItem (Hack)
- (NSRect)hackFrame
{
int objSize = class_getInstanceSize([NSObject class]);
id* _ffWindow = (void*)self + objSize + sizeof(NSStatusBar*) + sizeof(CGFloat);
NSWindow* window = *_ffWindow;
return [window frame];
}
@end
This is useful for status items without a custom view.
Tested on Lion
