NSReadPixel() always returns nil - macOS

I want to create a color picker, so I thought NSReadPixel would be a good approach to get the color of the pixel under the cursor. This is basically what I did:
class CustomWindowController: NSWindowController {
    override func mouseMoved(with event: NSEvent) {
        let mouseLocation = NSEvent.mouseLocation()
        let pickedColor = NSReadPixel(mouseLocation)
    }
}
But pickedColor always returns nil. Even if I try to "readPixel" with a fixed point (for testing purposes) it still returns nil. What am I missing?
EDIT #1
I’ve followed the NSBitmapImageRep / colorAt approach from the answers and noticed that the resulting NSColor seems to be a bit different (in most cases brighter) than it should be (take a look at the screenshot). Do I have to take color spaces into account, and if so, how?
EDIT #2
Got it to work - bitmap.colorSpaceName = NSDeviceRGBColorSpace does the trick.

NSReadPixel really doesn't work here. I use this code for getting the pixel color:
NSPoint _point = [NSEvent mouseLocation];
CGFloat x = floor(_point.x);
CGFloat y = [NSScreen mainScreen].frame.size.height - floor(_point.y);
// this is needed because AppKit and Core Graphics use different coordinate systems
CGWindowID windowID = (CGWindowID)[self windowNumber];
CGImageRef pixel = CGWindowListCreateImage(CGRectMake(x, y, 1, 1), kCGWindowListOptionOnScreenBelowWindow, windowID, kCGWindowImageNominalResolution);
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:pixel];
CGImageRelease(pixel);
NSColor *color = [bitmap colorAtX:0 y:0];
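If the color you get back looks brighter than the on-screen pixel (the situation described in EDIT #1 above), a color space mismatch is the likely cause. Here is a minimal, untested sketch of converting the sampled color to device RGB (the same idea as the asker's EDIT #2), assuming the color variable from the snippet above:
NSColor *deviceColor = [color colorUsingColorSpaceName:NSDeviceRGBColorSpace]; // nil if the conversion is not possible
CGFloat r, g, b, a;
[deviceColor getRed:&r green:&g blue:&b alpha:&a];
NSLog(@"picked color: %.3f %.3f %.3f", r, g, b);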


How to subclass NSTextAttachment?

Here is my problem:
I use Core Data to store rich text input from iOS and/or OS X apps and would like images pasted into the NSTextView or UITextView to:
a) retain their original resolution, and
b) be scaled on display to fit the text view correctly, which means scaling based on the size of the view on the device.
Currently I am using - (void)textStorage:(NSTextStorage *)textStorage didProcessEditing:(NSTextStorageEditActions)editedMask range:(NSRange)editedRange changeInLength:(NSInteger)delta to look for attachments, generate an image with a scale factor, and assign it to the textAttachment.image attribute.
This kind of works, because I just change the scale factor and the original image gets retained, but I believe a more elegant solution would be to use an NSTextAttachmentContainer subclass and to return from it an appropriately sized CGRect with
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
So my question is: how do I create and insert such a subclass?
Do I use textStorage:didProcessEditing: to iterate over each attachment and replace its NSTextAttachmentContainer with a class of my own, or can I simply create a category and somehow use it to change the default behaviour? The latter seems much less intrusive, but how do I get my text views to automatically use this category?
Oops: I just noticed NSTextAttachmentContainer is a protocol, so I assume creating a category on NSTextAttachment and overriding the method above is an option.
Mmm: a category can't be used to override an existing method of a class, so I guess subclassing is the only option. In that case, how do I get the UITextView to use my attachment subclass, or do I have to iterate over the attributed string to replace all NSTextAttachments with instances of MYTextAttachment? And what will be the impact of unarchiving this string on OS X into, say, the default OS X NSTextAttachment (which is different from the iOS class)?
Based on this excellent article, if you want to make use of
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
to scale an image text attachment, you have to create your own subclass of NSTextAttachment:
@interface MYTextAttachment : NSTextAttachment
@end
with the scale operation in the implementation:
@implementation MYTextAttachment
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex {
    CGFloat width = lineFrag.size.width;
    // Scale how you want
    float scalingFactor = 1.0;
    CGSize imageSize = [self.image size];
    if (width < imageSize.width)
        scalingFactor = width / imageSize.width;
    CGRect rect = CGRectMake(0, 0, imageSize.width * scalingFactor, imageSize.height * scalingFactor);
    return rect;
}
@end
based on
lineFrag.size.width
which gives you (or what I understand to be) the width taken by the text view on which you have set (or will set) the attributed text "embedding" your custom text attachment.
Once the NSTextAttachment subclass is created, all you have to do is make use of it: create an instance of it, set an image, then create a new attributed string with it and append it to an NSMutableAttributedString, for example:
MYTextAttachment *_textAttachment = [MYTextAttachment new];
_textAttachment.image = [UIImage ... ];
[_myMutableAttributedString appendAttributedString:[NSAttributedString attributedStringWithAttachment:_textAttachment]];
For info, it seems that
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
is called whenever the text view needs to lay itself out again.
Hope it helps, even though it doesn't answer every aspect of your problem.
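Regarding the part of the question about retrofitting attributed strings that already contain plain NSTextAttachment instances: one possibility (an untested sketch; mutableString is just a placeholder name for an NSMutableAttributedString you already have) is to enumerate the attachment attribute and swap in the subclass:
[mutableString enumerateAttribute:NSAttachmentAttributeName
                          inRange:NSMakeRange(0, mutableString.length)
                          options:0
                       usingBlock:^(id value, NSRange range, BOOL *stop) {
    if ([value isKindOfClass:[NSTextAttachment class]] && ![value isKindOfClass:[MYTextAttachment class]]) {
        MYTextAttachment *replacement = [MYTextAttachment new];
        // copy over whatever your attachments actually carry (image, fileWrapper, bounds, ...)
        replacement.image = ((NSTextAttachment *)value).image;
        [mutableString addAttribute:NSAttachmentAttributeName value:replacement range:range];
    }
}];
Mutating attributes inside the block is allowed for mutable attributed strings as long as the change stays within the enumerated range.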
Swift 3 (based on @Bluezen's answer):
class MyTextAttachment : NSTextAttachment {
    override func attachmentBounds(for textContainer: NSTextContainer?, proposedLineFragment lineFrag: CGRect, glyphPosition position: CGPoint, characterIndex charIndex: Int) -> CGRect {
        guard let image = self.image else {
            return CGRect.zero
        }
        let height = lineFrag.size.height
        // Scale how you want
        var scalingFactor = CGFloat(0.8)
        let imageSize = image.size
        if height < imageSize.height {
            scalingFactor *= height / imageSize.height
        }
        let rect = CGRect(x: 0, y: 0, width: imageSize.width * scalingFactor, height: imageSize.height * scalingFactor)
        return rect
    }
}
Note that I am scaling based on height, as I was getting a lineFrag width value of 10000000 in my particular use case. Also note that I replaced scalingFactor = ... with scalingFactor *= ... so that I could use an additional, non-unity scaling factor (0.8 in this case).

OS X, Cocoa: How to highlight burned-out areas of a photo?

I've got an app that displays photos using NSImage – specifically, -[NSImage drawInRect:fromRect:operation:fraction:]. I want to highlight areas of the photo that are completely burned out (maximum values in all components, pure white) using a color like red, as some digital cameras and image processing apps do, to help the user see whether the image is overexposed, and how badly.
I've been scratching my head as to how to do this. Options I've considered:
I could probably write a Core Image filter to do it; none of the built-in filters look up to the task. That seems like overkill, though; I've been reading through the docs, and it looks fairly complicated. Big learning curve.
I could scan through the bitmap data for the image and modify it as necessary. This is easy enough to code for one bitmap format, but the multitude of bitmap formats makes it a rather annoying exercise, and speed is important here, so writing general-purpose code that renders the image up to some maximal common format and works on that bitmap would be too big a speed penalty.
As it happens, I am already scanning through images (handling all the different bitmap formats) at an earlier point in the code, to generate histogram data for the images. I could pretty easily add code at that point that would remember the burned-out pixels for later use. I'm not quite sure what the best way is to do that, though. A 1-bit-per-pixel NSBitmapImageRep? How would I draw it later, making the 1-pixels draw red and the 0-pixels draw transparent, for example? I don't want to make a 32-bit NSBitmapImageRep with an alpha channel and everything just for this purpose, as memory is not infinite and images are large. But there must be a way to draw a 1-bit mask in a given color, somehow.
Before forging ahead with one of these approaches, I thought I'd see whether anybody here has a better idea. Or maybe has implemented the CI filter in question already? Apart from the learning curve, that seems like the best approach I've thought of so far – no memory overhead, and probably faster than other options, too.
Thanks...
Ben Haller
Stick Software
OK, I implemented my own Core Image filter to do this. Wasn't as hard as I expected, although the documentation is not great for this stuff. The doc examples all assume you're using ARC, so if you're not, following those examples will give you various retain/release bugs. There was also a little weirdness with the CIFilterConstructor stuff, which did not quite go as documented. But overall pretty easy. CI is cool. My code is below, for anybody who might find it useful:
Header:
#import <QuartzCore/QuartzCore.h>
@interface SSTintHighlightsFilter : CIFilter
{
    CIImage *inputImage;
    CIColor *highlightColor;
}
@end
Implementation file:
#import "SSTintHighlightsFilter.h"
static CIKernel *tintHighlightsFilter = nil;
@implementation SSTintHighlightsFilter
+ (void)initialize
{
    [CIFilter registerFilterName:@"SSTintHighlightsFilter"
                     constructor:(id<CIFilterConstructor>)self
                 classAttributes:[NSDictionary dictionaryWithObjectsAndKeys:@"Tint Highlights", kCIAttributeFilterDisplayName, [NSArray arrayWithObjects:kCICategoryColorAdjustment, kCICategoryStillImage, nil], kCIAttributeFilterCategories, nil]];
}
+ (CIFilter *)filterWithName:(NSString *)name
{
    CIFilter *filter = [[self alloc] init];
    return [filter autorelease];
}
- (id)init
{
    if (!tintHighlightsFilter)
    {
        NSBundle *bundle = [NSBundle bundleForClass:[self class]];
        NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"tintHighlightsAndShadows" ofType:@"cikernel"] encoding:NSASCIIStringEncoding error:NULL];
        NSArray *kernels = [CIKernel kernelsWithString:code];
        tintHighlightsFilter = [[kernels objectAtIndex:0] retain];
    }
    return [super init];
}
- (NSDictionary *)customAttributes
{
    NSDictionary *attrs = @{
        @"highlightColor" : @{ kCIAttributeClass : [CIColor class], kCIAttributeType : kCIAttributeTypeOpaqueColor }
    };
    return attrs;
}
- (CIImage *)outputImage
{
    CISampler *src = [CISampler samplerWithImage:inputImage];
    return [self apply:tintHighlightsFilter
             arguments:[NSArray arrayWithObjects:src, highlightColor, nil]
               options:[NSDictionary dictionaryWithObjectsAndKeys:[src definition], kCIApplyOptionDefinition, nil]];
}
@end
tintHighlights.cikernel:
kernel vec4 tintHighlights(sampler inputImage, __color highlightColor)
{
    vec4 originalColor, tintedColor;
    float sum;
    // fetch the source pixel
    originalColor = sample(inputImage, samplerCoord(inputImage));
    // calculate the color component sum as a way of testing whether we are black or white
    sum = originalColor.r + originalColor.g + originalColor.b;
    // replace pixels that are white with the highlight color
    tintedColor = (sum > 2.99999999999999999999999) ? highlightColor : originalColor;
    // preserve alpha
    tintedColor.a = originalColor.a;
    return tintedColor;
}
using the filter:
+ (NSImage *)showHighlightsInImage:(NSImage *)img dstRect:(NSRect)dstRect
{
    NSGraphicsContext *currentContext = [NSGraphicsContext currentContext];
    NSRect dstRectForCGImage = dstRect; // because the method below wants a pointer, and I don't trust it not to modify my rect...
    CGImageRef cgImage = [img CGImageForProposedRect:&dstRectForCGImage context:currentContext hints:nil];
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:cgImage];
    [SSTintHighlightsFilter class]; // get my filter initialized
    CIFilter *highlightFilter = [CIFilter filterWithName:@"SSTintHighlightsFilter"];
    [highlightFilter setValue:inputImage forKey:@"inputImage"];
    [highlightFilter setValue:[CIColor colorWithRed:1.0 green:0.0 blue:0.0] forKey:@"highlightColor"];
    [inputImage release];
    CIImage *outputImage = [highlightFilter valueForKey:@"outputImage"];
    NSImage *resultImage = [[NSImage alloc] initWithSize:[img size]];
    [resultImage addRepresentation:[NSCIImageRep imageRepWithCIImage:outputImage]];
    return [resultImage autorelease];
}
I'm not sure that I'm handling the alpha entirely robustly, with premultiplication issues and so forth, but apart from that possible glitch it is working great.

How to move the label to the right in Core Plot?

I use the following method to display the labels for my plot:
- (CPTLayer *)dataLabelForPlot:(CPTPlot *)plot recordIndex:(NSUInteger)index {
    ...
    CPTTextLayer *label = [[CPTTextLayer alloc] initWithText:stringValue style:textStyle];
}
which should return the label for every index.
I know that it's possible to move the label up or down using:
plot.labelOffset=10;
The question is: how can I move the label a bit to the right?
I tried to use
label.paddingLeft=50.0f;
but it doesn't work.
Adding padding as in your example does work, but maybe not in the way you expect. Scatter and bar plots will center the label above each data point (with a positive offset). The padding makes the whole label wider, so when it is centered, the text appears off to the side. It's hard to control, especially if the label texts are different lengths.
There is an outstanding issue to address this (issue 266). No guarantees when it will be fixed, but it is something we're looking at.
I ran into the same problem and came up with a different solution.
What I decided to do was to create the label using the CPTAxisLabel method initWithContentLayer:
CPTTextLayer *textLayer = [[CPTTextLayer alloc] initWithText:labelStr style:axisTextStyle];
CGSize textSize = [textLayer sizeThatFits];
// Calculate the padding needed to center the label under the plot record.
textLayer.paddingLeft = barCenterLeftOffset - textSize.width/2.0;
CPTAxisLabel *label = [[CPTAxisLabel alloc] initWithContentLayer:textLayer];
Here barCenterLeftOffset is the offset of the center of the plot record.
I wrote an article about this:
http://finalize.com/2014/09/18/horizontal-label-positioning-in-core-plot-and-other-miscellaneous-topics/
A demo project I created that uses this solution can be found at:
https://github.com/scottcarter/algorithms
You can subclass CPTTextLayer and include an offset.
@interface WPTextLayer : CPTTextLayer
@property (nonatomic) CGPoint offset;
@end

@implementation WPTextLayer
- (void)setPosition:(CGPoint)position
{
    CGPoint p = CGPointMake(position.x + self.offset.x, position.y + self.offset.y);
    [super setPosition:p];
}
@end
Then use:
WPTextLayer *tLayer = [[WPTextLayer alloc] initWithText:@"blah" style:textStyle];
tLayer.offset = CGPointMake(3, -3);
return tLayer;
There may be consequences of this that I'm not aware of, but it seems to be working so far.

How to draw both stroked and filled text in drawLayer:inContext delegate

This is my drawLayer:inContext: method in a CALayer's delegate.
It's only responsible for drawing a string with length = 1.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    CGRect boundingBox = CGContextGetClipBoundingBox(ctx);
    NSAttributedString *string = [[NSAttributedString alloc] initWithString:self.letter attributes:[self attrs]];
    CGContextSaveGState(ctx);
    CGContextSetShadowWithColor(ctx, CGSizeZero, 3.0, CGColorCreateGenericRGB(1.0, 1.0, 0.922, 1.0));
    CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)string);
    CGRect rect = CTLineGetImageBounds(line, ctx);
    CGFloat xOffset = CGRectGetMidX(rect);
    CGFloat yOffset = CGRectGetMidY(rect);
    CGPoint pos = CGPointMake(CGRectGetMidX(boundingBox) - xOffset, CGRectGetMidY(boundingBox) - yOffset);
    CGContextSetTextPosition(ctx, pos.x, pos.y);
    CTLineDraw(line, ctx);
    CGContextRestoreGState(ctx);
}
Here's the attributes dictionary:
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSFont fontWithName:@"GillSans-Bold" size:72.0], NSFontAttributeName,
                            [NSColor blackColor], NSForegroundColorAttributeName,
                            [NSNumber numberWithFloat:1.0], NSStrokeWidthAttributeName,
                            [NSColor blackColor], NSStrokeColorAttributeName,
                            style, NSParagraphStyleAttributeName, nil];
As is, the stroke does not draw, but the fill does.
If I comment out the stroke attributes in the dictionary, the fill draws.
I know this can't be right, but I can't find any reference to this problem.
Is this a known issue when drawing text with a delegate?
As the string is one character, I was following the doc example in not using any framesetter machinery, but I tried that anyway as a fix attempt, without success.
In reading this question's answer, I realized that I needed to be using a negative number for the stroke value. I was thinking of the stroke being applied to the outside of the letter drawn by CTLineDraw, rather than inside the text shape.
I'm answering my own question in case this should help anyone else with this misunderstanding, as I didn't see the referenced doc covering this.
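For reference, here is what the attributes dictionary from the question looks like with that fix applied. This is just a sketch: -2.0 is an arbitrary example value (the stroke width is interpreted as a percentage of the font point size), the fill color is changed to white purely so the black stroke is visible around it, and style is the paragraph style from the original snippet:
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSFont fontWithName:@"GillSans-Bold" size:72.0], NSFontAttributeName,
                            [NSColor whiteColor], NSForegroundColorAttributeName,        // fill color
                            [NSColor blackColor], NSStrokeColorAttributeName,            // stroke color
                            [NSNumber numberWithFloat:-2.0], NSStrokeWidthAttributeName, // negative: stroke AND fill
                            style, NSParagraphStyleAttributeName, nil];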

How do I apply multiple color masks using CGImageCreateWithMaskingColors?

I'm a bit new to Objective-C and even newer at programming with Quartz 2D, so apologies in advance! I've got a method where I would like to remove a handful of specific colors (not just one) from a UIImage.
When I run my project with just one color mask being applied, it works beautifully. Once I try stacking them, the 'whiteRef' comes out NULL. I've even tried modifying my method to take a color mask and then simply running my method twice (feeding in the different color masks), but still no go.
Any help with this is greatly appreciated!
- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
    const float brownsMask[6] = {124, 255, 68, 222, 0, 165};
    const float whiteMask[6] = {255, 255, 255, 255, 255, 255};
    UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
    CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
    CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), whiteRef);
    CGImageRelease(brownRef);
    CGImageRelease(whiteRef);
    UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [imageView release];
    return doctoredImage;
}
Well, I found a good workaround! Basically, I was ultimately using the image data over an MKMapView, so all I needed to do was break the image down into its pixels, and from there I could mess around as I please. It may not be the best option in terms of speed, but it does the trick. Here is a sample of the code I'm now using.
//Split the images into pixels to mess with the image pixel colors individually
size_t bufferLength = gridWidth * gridHeight * 4;
unsigned char *rawData = nil;
rawData = (unsigned char *)[self convertUIImageToBitmapRGBA8:myUIImage];
grid = malloc(sizeof(float)*(bufferLength/4));
NSString *hexColor = nil;
for (int i = 0; i < (bufferLength); i = i + 4)
{
    hexColor = [NSString stringWithFormat:@"%02x%02x%02x", (int)(rawData[i + 0]), (int)(rawData[i + 1]), (int)(rawData[i + 2])];
    //mess with colors how you see fit - I just detected certain colors and slapped
    //that into an array of floats which I later put over my mapview much like the
    //hazardmap example from apple.
    if ([hexColor isEqualToString:@"ff0299"]) //pink
        value = (float)11;
    if ([hexColor isEqualToString:@"9933cc"]) //purple
        value = (float)12;
    //etc...
    grid[i/4] = value;
}
I also borrowed some methods (i.e. convertUIImageToBitmapRGBA8) from here: https://gist.github.com/739132
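For completeness, here is an untested sketch of what a helper in the spirit of convertUIImageToBitmapRGBA8 boils down to (the function name below is made up; the linked gist has its own, more complete implementation): draw the image into an RGBA8 bitmap context and hand back the raw bytes, which the caller must free.
unsigned char *BitmapRGBA8FromUIImage(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;
    unsigned char *buffer = (unsigned char *)calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (!context) {
        free(buffer);
        return NULL;
    }
    // Alpha is premultiplied into the RGB components; fully opaque pixels come through unchanged.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return buffer; // RGBA, 4 bytes per pixel, row by row; free() when done
}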
Hope this helps someone out!
