Multiplying colors when compositing with Quartz - cocoa

I'm trying to draw an image with a certain color. To get this effect, first I draw the image (consisting of mostly white and black), and then I draw a rectangle over it in a specified color (let's say red). There are various compositing options available, but none are exactly what I want.
I want the colors (and alpha values) to be multiplied together. I think this is sometimes called a "Modulate" operation in other graphics systems. So, the white values in the image will be multiplied by the red values in the overlay rectangle to produce red, and the black values in the image will be multiplied by the red to produce black.
In other words, R = S * D (result equals source multiplied by destination).
This is the code I'm working with now:
[image drawInRect:[self bounds] fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];
NSColor * blend = [NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1.0];
[blend setFill];
NSRectFillUsingOperation(self.bounds, NSCompositePlusDarker);
NSCompositePlusDarker is NOT what I want, but it's close: it's addition instead of multiplication.
Is there a way to do this in Quartz? It's very easy to do in OpenGL, for instance. I've looked at CIFilter, but it seems a bit cumbersome for this. If that's the only way, though, I would appreciate any sample code to get me pointed in the right direction.

Yes.
Note that AppKit compositing operations and Quartz blend modes, though quite a few of them overlap, are not interchangeable. kCGBlendModeMultiply has the same numeric value as NSCompositeCopy, so the latter (a flat overwrite) is what will happen if you try to pass it to any of AppKit's drawing APIs.
So, you'll need to use Quartz for every part of the fill. You can use NSGraphicsContext to get the CGContext you'll need, and you can ask your NSColor for its CGColor.
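Here's a minimal sketch of what that might look like inside a view's -drawRect: (assuming OS X 10.8 or later, where NSColor exposes a CGColor property, as mentioned above):
[image drawInRect:self.bounds fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(ctx);
// Multiply every destination pixel by the fill color: R = S * D
CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
CGContextSetFillColorWithColor(ctx, [[NSColor colorWithCalibratedRed:1 green:0 blue:0 alpha:1.0] CGColor]);
CGContextFillRect(ctx, NSRectToCGRect(self.bounds));
CGContextRestoreGState(ctx);
The save/restore pair keeps the multiply blend mode from leaking into any drawing you do afterwards.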

Related

Obtaining RGB value of a pixel or average RGB value of pixels in UIImageView behind a UILabel in Swift

I have a UILabel that is placed on top of a UIImageView. The UIImageView can change to display a variety of images, and depending on the image displayed, the UILabel's text color must change dynamically. For instance, when a light-colored image is displayed, a dark UILabel text color will be used, and when a dark image is used, a light text color will be used.
In Swift, what is the best, simplest, most efficient method to extract the RGB value of a single pixel, or the average RGB value of a group of pixels, directly behind the position of the UILabel sitting above the UIImageView?
Or even better, in Swift, is there a UILabel method that changes the text colour dynamically based on the background it is positioned above?
Thank you.
Honestly, I would not even grab RGB; you should know what image you are putting into the UIImageView, so plan the label color based on that.
If you must grab RGB, then do something like this:
UIGraphicsBeginImageContext(CGSizeMake(1, 1))
let context = UIGraphicsGetCurrentContext()
// Translate in the opposite direction so that when we render into the canvas,
// our point of interest lands at the first spot in readable memory
CGContextTranslateCTM(context, -x, -y)
// Render the CALayer into our one-pixel "canvas"
self.layer.renderInContext(context!)
let colorPoint = UnsafeMutablePointer<UInt32>(CGBitmapContextGetData(context))
UIGraphicsEndImageContext()
Where x and y are the coordinates of the point you want to grab, and self is your UIImageView.
Edit: @Mr.T posted a link to how this is done as well. If you need the average, just grab the required number of pixels by changing UIGraphicsBeginImageContext(CGSizeMake(1, 1)) to UIGraphicsBeginImageContext(CGSizeMake(width, height)), and compute the average from that data.
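If you do go the averaging route, here is a rough sketch of the arithmetic (in Objective-C, for consistency with the rest of this page; the RGBA byte order is an assumption, so check your bitmap context's info flags before relying on it):
// Hypothetical sketch: average width * height pixels, assuming 4 bytes per pixel, RGBA order
uint8_t *data = (uint8_t *)CGBitmapContextGetData(context);
NSUInteger pixelCount = width * height;
NSUInteger sumR = 0, sumG = 0, sumB = 0;
for (NSUInteger i = 0; i < pixelCount; i++) {
    sumR += data[i * 4 + 0];
    sumG += data[i * 4 + 1];
    sumB += data[i * 4 + 2];
}
CGFloat avgR = (CGFloat)sumR / pixelCount / 255.0;
CGFloat avgG = (CGFloat)sumG / pixelCount / 255.0;
CGFloat avgB = (CGFloat)sumB / pixelCount / 255.0;
A perceived-brightness test on those averages (for example, 0.299 * avgR + 0.587 * avgG + 0.114 * avgB) is then enough to pick a dark or light text color.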

Blending over NSVisualEffectView

I want to draw an image with the HardLight composite operation. I've created an NSImageView with the following draw code:
- (void)drawRect:(NSRect)dirtyRect {
    //[super drawRect:dirtyRect];
    if (self.image != nil) {
        [self.image drawInRect:self.bounds fromRect:NSZeroRect operation:NSCompositeHardLight fraction:1.0];
    }
}
In the usual case it works well, but it does not work over an NSVisualEffectView.
How can I blend HardLight over an NSVisualEffectView?
In the image linked below you can see a rounded rectangle that blends HardLight over the window background and a colour image. But over the NSVisualEffectView (the red bottom rectangle) it draws just grey.
https://www.dropbox.com/s/bcpe6vdha6xfc5t/Screenshot%202015-03-27%2000.32.53.png?dl=0
Roughly speaking, image compositing takes none, one, or both of the pixels from the source and destination, applies some composite operation, and writes the result to the destination. To get any effect that takes the destination pixel into account, that pixel's color information must be known when the compositing operation takes place, which is in your implementation of -drawRect:.
I’m assuming you’re talking about behind window blending (NSVisualEffectBlendingModeBehindWindow) here. The problem with NSVisualEffectView is that it does not draw anything. Instead, it defines a region that tells the WindowServer process to do its vibrancy stuff in that region. This happens after your app draws its views.
Therefore a compositing operation in your app cannot take into account the pixels that the window server draws later. In short, this cannot be done.

Why doesn't CGContextShowGlyphsAtPositions() work, when CGContextShowGlyphsAtPoint() does work?

I have written a simple Cocoa app for Mac OS X (10.7) using Xcode 4.2. All the app does is create a window with a scrollable array of sub-Views in it, each representing a page to draw stuff on at a very low level. The sub-View's isFlipped method delivers YES, so the origin of every sub-View is the upper left corner. Using various Core Graphics routines, I'm able to draw lines and fill paths and all that fun PostScripty stuff successfully.
It's drawing glyphs from a given font that's got me confused.
Here's the complete code, cut-n-pasted from the program, for the sub-View's -drawRect: method --
- (void)drawRect:(NSRect)dirtyRect
{
    // Start with background color for any part of this view
    [[NSColor whiteColor] set];
    NSRectFill( dirtyRect );

    // Drop down to the Core Graphics world, ensuring there are no side effects
    CGContextRef context = (CGContextRef) [[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(context);
    {
        //CGFontRef theFont = CGFontCreateWithFontName(CFSTR("American Typewriter"));
        //CGContextSetFont(context, theFont);
        CGContextSelectFont(context, "American Typewriter", 200, kCGEncodingMacRoman);
        CGContextSetFontSize(context, 200);

        // Adjust the text transform so the text doesn't draw upside down
        CGContextSetTextMatrix(context, CGAffineTransformScale(CGAffineTransformIdentity, 1, -1));
        CGContextSetTextDrawingMode(context, kCGTextFillStroke);
        CGContextSetRGBFillColor(context, 0.0, 0.3, 0.8, 1.0);

        // Find the center of the view's (not dirtyRect's) bounds
        // View is 612 x 792 (nominally 8.5" by 11")
        CGPoint centerPoint;
        CGRect bds = [self bounds];
        centerPoint.x = bds.origin.x + bds.size.width / 2;
        centerPoint.y = bds.origin.y + bds.size.height / 2;

        // Create arrays to hold glyph IDs and the positions at which to draw them.
        #define glyphCount 1 // For now, just one glyph
        CGGlyph glyphs[glyphCount];
        CGPoint positions[glyphCount];
        glyphs[0] = 40; // Glyph ID for '#' character in the above font
        positions[0] = centerPoint;

        // Draw above center. This works.
        CGContextShowGlyphsAtPoint(context, centerPoint.x, centerPoint.y - 200.0, glyphs, glyphCount);

        // Draw at center. This works.
        CGContextShowGlyphsAtPoint(context, positions[0].x, positions[0].y, glyphs, glyphCount);

        // Draw below center. This fails (draws nothing). Why?
        positions[0].y += 200.0;
        CGContextShowGlyphsAtPositions(context, glyphs, positions, glyphCount);
    }
    CGContextRestoreGState(context);
}
What's got me pulling my hair out is that the first two glyph-drawing calls using CGContextShowGlyphsAtPoint() work fine as expected, but the third attempt using CGContextShowGlyphsAtPositions() never draws anything. So there are only two # symbols on the page, rather than three. This difference in behaviors doesn't depend on whether I've previously used CGContextSetFont() or CGContextSelectFont().
There must be some hidden change in state going on, or something very different under the hood w/r/t these two almost identical Core Graphics glyph-drawing routines, but all my experiments so far have not demonstrated what that might be.
Sigh. I just want to efficiently draw an array of glyphs at a corresponding array of positions in a view.
Any ideas what I'm getting wrong?
After much experimentation enabled by being whacked upside the head by Peter Hosey's response (even though some of it isn't quite right, many thanks!), here's the source of my confusion and an explanation I'm pretty sure is correct (well, the code is doing what I expect it to, anyway).
In the usual higher-level PostScript path/drawing model, drawing a character updates the current point (path end) to the position where the next character might appear, leaving the current user-space transform the same. But under the hood, the text matrix transform is translated by the glyph's width (or, more accurately, by an advance vector) so that the next character to be drawn can start at, or with respect to, a new text origin. The text matrix's scale factors remain unchanged after translation.
So the initial setup call to CGContextSetTextMatrix() to flip the vertical sense of the text matrix is still necessary (if user-space is similarly flipped), because otherwise both glyph-collection drawing routines will draw the glyphs upside-down w/r/t path drawing, no matter where the text drawing starts or which drawing routine is used.
Neither of the two glyph collection drawing routines affects the current path. They are lower-level than that. I found that I could intersperse either routine among path construction calls without affecting a path's position or shape.
In the code posted above, the position data that CGContextShowGlyphsAtPositions() uses to draw the glyph collection are all relative to the user-space point corresponding to the current text matrix's origin, which was translated to the right of the previously drawn '#' glyph. Because I was using such a large font size, position[0] was causing the next '#' glyph to be drawn outside the view's bounds, so it wasn't visible, but it was being drawn.
But there are still some nuances between the two routines. CGContextShowGlyphsAtPositions() can never be used to place glyphs at an absolute user-space position. So how do you tell it where to start? The answer (or at least one answer) is that CGContextShowGlyphsAtPoint() updates the origin of the text matrix to the given user-space point even if there are no glyphs to draw. And CGContextShowGlyphsAtPoint() must translate the text matrix after each glyph it draws, because what would be the point (so to speak) of drawing the entire glyph collection on top of itself?
So one can "move" to a non-path point in user space using CGContextShowGlyphsAtPoint() with a glyph count of 0, and then call CGContextShowGlyphsAtPositions() (any number of times) with a vector of positions, each of which is treated as relative to the text matrix's origin (or really, the user-space point corresponding to it). The text matrix origin is not updated at all when CGContextShowGlyphsAtPositions() returns.
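A minimal sketch of that "move, then place" pattern, reusing context, glyphs, and glyphCount from the code above (startX, startY, and advance are hypothetical placeholders):
// Move the text origin without drawing anything (glyph count of 0)
CGContextShowGlyphsAtPoint(context, startX, startY, glyphs, 0);
// These positions are treated as relative to the text origin just set
CGPoint offsets[glyphCount];
for (size_t i = 0; i < glyphCount; i++) {
    offsets[i] = CGPointMake(i * advance, 0);
}
CGContextShowGlyphsAtPositions(context, glyphs, offsets, glyphCount);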
Finally, note that the position data provided to CGContextShowGlyphsAtPositions() is in user-space coordinates. A comment in Apple's header file for these routines expressly says so.
One possibility is this, from the CGContextShowGlyphsAtPositions documentation:
The position of each glyph is specified in text space, and, as a consequence, is transformed through the text matrix to user space.
The text matrix is a separate property of the context, distinct from the graphics state's current transformation matrix.
It doesn't say that about CGContextShowGlyphsAtPoint:
This function displays an array of glyphs at the specified position in the user space.
(Emphasis added to both quotes.)
So, your text matrix is not actually used when you show glyphs from a single point.
But then, when you show glyphs at an array of positions, it is used, and you see the symptom of a wrong matrix. Specifically, the matrix you use to try to flip the text back the other way is wrong: it flips the coordinate system upside down. You are drawing outside of the view.
(Try setting it to scale by 0.5 instead of -1 and you'll see what I mean.)
My recommendation is to take out your CGContextSetTextMatrix call.

NSDrawNinePartImage gaps

I am working on drawing custom buttons/text fields with NSDrawNinePartImage. I slice up an image into nine parts in code and draw it into a rect with NSDrawNinePartImage.
Unfortunately I am getting some gaps in the drawing pattern. I thought it had something to do with my slicing code, but I saved the slices out as images, and they all look good (I even put them together and they looked good). Some of the cases where I use it work just fine, though, despite using the same images.
I am pretty confident that it comes down to the actual drawing.
Do you know of any NSGraphicsContext or other settings that would affect it or something else that may be causing this?
With gaps
Without gaps
I fixed a similar issue by disabling the anti-aliasing option:
NSGraphicsContext* theContext = [NSGraphicsContext currentContext];
[theContext saveGraphicsState];
[theContext setShouldAntialias: NO];
NSDrawNinePartImage(...);
[theContext restoreGraphicsState];
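For reference, here is what the elided call might look like in full; the nine slice images are hypothetical placeholders for your own slices:
// frame, then the nine slices row by row (top, middle, bottom), then op, alpha, flipped
NSDrawNinePartImage(self.bounds,
                    topLeft,    topEdge,    topRight,
                    leftEdge,   center,     rightEdge,
                    bottomLeft, bottomEdge, bottomRight,
                    NSCompositeSourceOver, 1.0, NO);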
Is it possible that your upper left and lower right images are a pixel too wide?
This function uses the top-left and bottom-right corner images to determine the widths and heights of the edge areas that need to be filled. If the width or height of the bottom-left and top-right images are not sized appropriately, they may be scaled to fill their corner area. Edge areas between the corners are tiled using the corresponding image. Similarly, the center area is tiled using the specified center image.
Use even sizes for your images. I did this, and it solved the problem for me.

How to get pixel perfect drawing with cocoa

I'm trying to draw some of my UI elements in Cocoa, mainly icons for buttons, but I'm having great difficulty getting the kind of precision I'd like.
I'm using super simple code like this to draw rectangles:
[[NSColor redColor] set];
[NSBezierPath strokeRect:myRect];
But what I'm seeing is that the red rectangle's line is always faded.
What am I missing here?
Cocoa coordinates fall on the boundaries between pixels, so the center of a pixel lies at half-integer coordinates. For example, if you want to draw on the bottom-left pixel, you should use the coordinates (0.5, 0.5). A one-point line stroked along integer coordinates straddles two rows of pixels and gets antialiased into both, which is why your rectangle looks faded.
Add or subtract half a pixel from your coordinates and they should be pixel-perfect.
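A minimal sketch of that adjustment, assuming myRect has integral coordinates and the default one-point line width:
// Inset by half a point so the 1-point stroke lands on pixel centers
NSRect alignedRect = NSInsetRect(myRect, 0.5, 0.5);
[[NSColor redColor] set];
[NSBezierPath strokeRect:alignedRect];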
You can disable antialiasing for your graphics context like this:
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
Or you could use NSFrameRect instead of a bezier path, and that would get the precision you want, while keeping antialiasing on:
NSFrameRect(myRect);
