Drawing Text with Core Graphics - cocoa

I need to draw centered text to a CGContext.
I started with a Cocoa approach: I created an NSCell with the text and tried to draw it thus:
NSGraphicsContext* newCtx = [NSGraphicsContext
graphicsContextWithGraphicsPort:bitmapContext flipped:true];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:newCtx];
[pCell setFont:font];
[pCell drawWithFrame:rect inView:nil];
[NSGraphicsContext restoreGraphicsState];
But the CGBitmapContext doesn't seem to have the text rendered on it. Possibly because I have to pass nil for the inView: parameter.
So I tried switching text rendering to Core Graphics:
The simplest way seems to be to use CGContextSelectFont to select a font by its PostScript name and point size, but CGContextShowTextAtPoint only takes non-Unicode characters, and there is no apparent way to fit the text to a rectangle, or to compute the extents of a line of text so as to lay out the rectangle manually.
Then there is CGFont, which can be created and set via CGContextSetFont. Drawing text this way requires CGContextShowGlyphsAtPoint, but again CGContext seems to lack functions to compute the bounding rect of the generated text, or to wrap the text to a rect. Plus, how to transform a string into an array of CGGlyphs is not obvious.
The next option is to try using Core Text to render the string. But the Core Text classes are hugely complicated, and while there are samples that show how to display text, in a specified font, in a rect, there are no samples demonstrating how to compute the bounding rect of a Core Text string.
So:
Given a CGFont and a CGContext, how do I compute the bounding rect of some text, and how do I transform a text string into an array of CGGlyphs?
Given a string, a CGContext, and a PostScript name and point size, what Core Text objects do I need to create to compute the bounding rect of the string, and/or to draw the string wrapped to a rect on the CGContext?
Given a string and an NSFont, how do I render the string onto a CGBitmapContext? I already know how to get its extents.

I finally managed to find answers after 4 days of searching. I really wish Apple made better documentation. So here we go:
I assume you already have a CGFontRef with you. If not, let me know and I will tell you how to load a TTF from the resource bundle into a CGFontRef.
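As a hedged sketch, loading a TTF from the bundle into a CGFontRef can look like this (the font file name "MyFont" is a placeholder):

```objc
// Hypothetical example: "MyFont.ttf" stands in for your bundled font file.
NSURL *fontURL = [[NSBundle mainBundle] URLForResource:@"MyFont" withExtension:@"ttf"];
CGDataProviderRef provider = CGDataProviderCreateWithURL((CFURLRef)fontURL);
CGFontRef cgFont = CGFontCreateWithDataProvider(provider);
CGDataProviderRelease(provider);
// ... use cgFont, then CGFontRelease(cgFont) when done.
```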
Below is the code snippet to compute the bounds of any string with any CGFontref
// Assumes theCTFont (a CTFontRef) and string (an NSString *) already exist.
NSUInteger charCount = [string length];
unichar characters[charCount];
CGGlyph glyphs[charCount];
CGRect rects[charCount];
[string getCharacters:characters range:NSMakeRange(0, charCount)];
CTFontGetGlyphsForCharacters(theCTFont, characters, glyphs, charCount);
CTFontGetBoundingRectsForGlyphs(theCTFont, kCTFontDefaultOrientation, glyphs, rects, charCount);
CGFloat totalwidth = 0, maxheight = 0;
for (NSUInteger i = 0; i < charCount; i++)
{
    totalwidth += rects[i].size.width;
    maxheight = maxheight < rects[i].size.height ? rects[i].size.height : maxheight;
}
CGSize dim = CGSizeMake(totalwidth, maxheight);
Reuse the same function, CTFontGetGlyphsForCharacters, to get the glyphs. To get a CTFontRef from a CGFontRef, use the CTFontCreateWithGraphicsFont() function.
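For reference, that conversion is a one-liner; the point size here is an illustrative value:

```objc
// cgFont is your existing CGFontRef; 24.0 is a placeholder point size.
CTFontRef theCTFont = CTFontCreateWithGraphicsFont(cgFont, 24.0, NULL, NULL);
// ... use theCTFont, then CFRelease(theCTFont) when done.
```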
Also remember that NSFont and CTFontRef are toll-free bridged, meaning they can be cast into each other and will work seamlessly without any extra work. (CGFontRef is not toll-free bridged with NSFont; it needs the conversion function above.)

I would continue with your above approach but use NSAttributedString instead.
NSGraphicsContext* newCtx = [NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:true];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:newCtx];
NSAttributedString *string = /* make a string with all of the desired attributes */;
[string drawInRect:locationToDraw];
[NSGraphicsContext restoreGraphicsState];

Swift 5 version!
let targetSize: CGSize = // the space you have available for drawing the text
let origin: CGPoint = // where you want to position the top-left corner
let string: String = // your string
let font: UIFont = // your font
let attrs: [NSAttributedString.Key:Any] = [.font: font]
let boundingRect = (string as NSString).boundingRect(with: targetSize, options: [.usesLineFragmentOrigin], attributes: attrs, context: nil)
let textRect = CGRect(origin: origin, size: boundingRect.size)
(string as NSString).draw(with: textRect, options: [.usesLineFragmentOrigin], attributes: attrs, context: nil)

A complete sample code (Objective-C):
// Need to create pdf Graphics context for Drawing text
CGContextRef pdfContextRef = NULL;
CFURLRef writeFileUrl = (CFURLRef)[NSURL fileURLWithPath:writeFilePath];
if(writeFileUrl != NULL){
pdfContextRef = CGPDFContextCreateWithURL(writeFileUrl, NULL, NULL);
}
// Start page in PDF context
CGPDFContextBeginPage(pdfContextRef, NULL);
NSGraphicsContext* pdfGraphicsContext = [NSGraphicsContext graphicsContextWithCGContext:pdfContextRef flipped:false];
[NSGraphicsContext saveGraphicsState];
// Need to set the current graphics context
[NSGraphicsContext setCurrentContext:pdfGraphicsContext];
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSFont fontWithName:@"Helvetica" size:26], NSFontAttributeName, [NSColor blackColor], NSForegroundColorAttributeName, nil];
NSAttributedString *currentText = [[NSAttributedString alloc] initWithString:@"Write Something" attributes:attributes];
// [currentText drawInRect: CGRectMake(0, 300, 500, 100 )];
[currentText drawAtPoint:NSMakePoint(100, 100)];
[NSGraphicsContext restoreGraphicsState];
// Close all the created pdf Contexts
CGPDFContextEndPage(pdfContextRef);
CGPDFContextClose(pdfContextRef);

Related

CTRunDraw() does implement correct behaviour

I am implementing a custom text layout algorithm on Mac OS X using Core Text. I have to partially render a portion of a CTRun at different locations inside a custom NSView subclass object.
Here is my implementation of drawRect: method
- (void)drawRect:(NSRect)dirtyRect {
    // Drawing code here.
    CGContextRef context =
        (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(context); {
        [[NSColor whiteColor] set];
        NSRectFill(dirtyRect);
        CTFontRef font = CTFontCreateWithName(CFSTR("Menlo"), 20, &CGAffineTransformIdentity);
        CFTypeRef values[] = {font};
        CFStringRef keys[] = {kCTFontAttributeName};
        CFDictionaryRef dictionary =
            CFDictionaryCreate(NULL,
                               (const void **)keys,
                               (const void **)values,
                               sizeof(keys) / sizeof(keys[0]),
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
        CFAttributedStringRef longString =
            CFAttributedStringCreate(kCFAllocatorDefault, CFSTR("this_is_a_very_long_string_that_compromises_many_glyphs,we_wil_see_it:)"), dictionary);
        CTLineRef lineRef = CTLineCreateWithAttributedString(longString);
        CFArrayRef runsArray = CTLineGetGlyphRuns(lineRef);
        CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runsArray, 0);

        // Flip the text matrix, since the view uses flipped coordinates.
        CGAffineTransform textTransform = CGAffineTransformIdentity;
        textTransform = CGAffineTransformScale(textTransform, 1.0, -1.0);
        CGContextSetTextMatrix(context, textTransform);

        CGAffineTransform sequenceTransform = CGAffineTransformIdentity;
        sequenceTransform = CGAffineTransformTranslate(sequenceTransform, 0, 23.2818);

        CGPoint firstPoint = CGPointApplyAffineTransform(CGPointMake(0, 0), sequenceTransform);
        CFRange firstRange = CFRangeMake(0, 24);
        CGContextSetTextPosition(context, firstPoint.x, firstPoint.y);
        CTRunDraw(run, context, firstRange);

        CGPoint secondPoint = CGPointApplyAffineTransform(CGPointMake(0, 26.2812), sequenceTransform);
        CFRange secondRange = CFRangeMake(24, 24);
        CGContextSetTextPosition(context, secondPoint.x, secondPoint.y);
        CTRunDraw(run, context, secondRange);

        CGPoint thirdPoint = CGPointApplyAffineTransform(CGPointMake(0, 52.5625), sequenceTransform);
        CFRange thirdRange = CFRangeMake(48, 23);
        CGContextSetTextPosition(context, thirdPoint.x, thirdPoint.y);
        CTRunDraw(run, context, thirdRange);

        // Release the Core Foundation objects created above.
        CFRelease(lineRef);
        CFRelease(longString);
        CFRelease(dictionary);
        CFRelease(font);
    }
    CGContextRestoreGState(context);
}
Here is the output of this code
https://docs.google.com/open?id=0B8df1OdxKw4FYkE5Z1d1VUZQYWs
The problem is that CTRunDraw() inserts blank spaces at the positions outside the specified range.
What I want is for it to render the part of the run at its correct position.
Here is the correct output which I want. (The correct output is a Photoshop mockup of the original output.)
https://docs.google.com/open?id=0B8df1OdxKw4FcFRnS0p1cFBfa28
Note: I am using flipped coordinate system in my custom NSView subclass.
- (BOOL)isFlipped {
return YES;
}
You're misusing CTRun here. A CTRun is a horizontal box containing laid-out glyphs. It doesn't really make sense to try to draw portions of it underneath one another (certainly such typesetting would be wrong in any even slightly complicated case). Why? Well, because there are situations where the glyphs that are chosen might differ if there was a line break at a given position (this can happen, for instance, with ligatures). Also, note that there is not necessarily a 1:1 mapping between characters and glyphs.
My guess is that you probably don’t need your own full custom typesetter (and trust me, writing one is complicated, so if you don’t need one, don’t write one). Instead, just use CTTypesetter and/or CTFramesetter to get the output you want.
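A minimal sketch of the framesetter route (the names attrString, targetRect, maxWidth, and context are assumptions; the context's text matrix should already be set up for your coordinate system):

```objc
// Measure: ask the framesetter how much space the string needs at a given width.
CTFramesetterRef framesetter =
    CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
CFRange fitRange;
CGSize needed = CTFramesetterSuggestFrameSizeWithConstraints(framesetter,
    CFRangeMake(0, 0), NULL, CGSizeMake(maxWidth, CGFLOAT_MAX), &fitRange);

// Draw: wrap the whole string into a rect.
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, targetRect);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
CTFrameDraw(frame, context);

CFRelease(frame);
CGPathRelease(path);
CFRelease(framesetter);
```

The framesetter drives the typesetter for you, so line breaks, ligatures, and glyph choices come out right without writing a custom layout pass.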

How to draw both stroked and filled text in drawLayer:inContext delegate

This is my drawLayer: method in a CALayer's delegate.
It's only responsible for drawing a string of length 1.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
CGRect boundingBox = CGContextGetClipBoundingBox(ctx);
NSAttributedString *string = [[NSAttributedString alloc] initWithString:self.letter attributes:[self attrs]];
CGContextSaveGState(ctx);
CGContextSetShadowWithColor(ctx, CGSizeZero, 3.0, CGColorCreateGenericRGB(1.0, 1.0, 0.922, 1.0));
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)string);
CGRect rect = CTLineGetImageBounds(line, ctx);
CGFloat xOffset = CGRectGetMidX(rect);
CGFloat yOffset = CGRectGetMidY(rect);
CGPoint pos = CGPointMake(CGRectGetMidX(boundingBox) - xOffset, CGRectGetMidY(boundingBox)- yOffset);
CGContextSetTextPosition(ctx, pos.x, pos.y);
CTLineDraw(line, ctx);
CGContextRestoreGState(ctx);
}
Here's the attributes dictionary:
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSFont fontWithName:@"GillSans-Bold" size:72.0], NSFontAttributeName,
    [NSColor blackColor], NSForegroundColorAttributeName,
    [NSNumber numberWithFloat:1.0], NSStrokeWidthAttributeName,
    [NSColor blackColor], NSStrokeColorAttributeName,
    style, NSParagraphStyleAttributeName, nil];
As is, the stroke does not draw, but the fill does.
If I comment out the stroke attributes in the dictionary, the fill draws.
I know this can't be right, but I can't find any reference to this problem.
Is this a known issue when drawing text with a delegate?
As the string is one character, I was following the doc example in not using any framesetter machinery, but I tried that anyway as a fix attempt, without success.
In reading this question's answer, I realized that I needed to use a negative number for the stroke value. I was thinking of the stroke as being applied to the outside of the letter drawn by CTLineDraw, rather than inside the text shape.
I'm answering my own question in case this should help anyone else with this misunderstanding, as I didn't see the referenced doc covering this.
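To spell that out, the only change needed is the sign of the stroke width: a negative value strokes and fills, a positive value strokes only. The magnitude is a percentage of the font point size, and the -3.0 here is illustrative:

```objc
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSFont fontWithName:@"GillSans-Bold" size:72.0], NSFontAttributeName,
    [NSColor blackColor], NSForegroundColorAttributeName,
    // Negative width: stroke AND fill. Positive width: stroke only.
    [NSNumber numberWithFloat:-3.0], NSStrokeWidthAttributeName,
    [NSColor blackColor], NSStrokeColorAttributeName, nil];
```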

How to create a clipping mask from an NSAttributedString?

I have an NSAttributedString which I would like to draw into a CGImage so that I can later draw the CGImage into an NSView. Here's what I have so far:
// Draw attributed string into NSImage
NSImage* cacheImage = [[NSImage alloc] initWithSize:NSMakeSize(w, h)];
[cacheImage lockFocus];
[attributedString drawWithRect:NSMakeRect(0, 0, width, height) options:0];
[cacheImage unlockFocus];
// Convert NSImage to CGImageRef
CGImageSourceRef source = CGImageSourceCreateWithData(
(CFDataRef)[cacheImage TIFFRepresentation], NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(source, 0, NULL);
I'm not using -[NSImage CGImageForProposedRect:context:hints] because my app must use the 10.5 SDK.
When I draw this into my NSView using CGContextDrawImage, it draws a transparent background around the text, causing whatever is behind the window to show through. I think I want to create a clipping mask, but I can't figure out how to do that.
It sounds like your blend mode is set up as Copy instead of SourceOver. Take a look at the Core Graphics blend mode documentation.
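A sketch of the fix, assuming ctx is the CGContextRef you draw the image into:

```objc
// kCGBlendModeNormal is source-over compositing: it respects the image's
// alpha channel instead of copying its transparent pixels over the destination.
CGContextSetBlendMode(ctx, kCGBlendModeNormal);
CGContextDrawImage(ctx, rect, img);
```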

How do I apply multiple color masks using CGImageCreateWithMaskingColors?

I'm a bit new to Objective-C and even newer at programming with Quartz 2D, so apologies in advance! I've got a method where I would like to remove a handful of specific colors (not just one) from a UIImage.
When I run my project with just one color mask being applied, it works beautifully. Once I try stacking them, the 'whiteRef' comes out NULL. I've even tried modifying my method to take a single color mask and then simply running the method twice, feeding in the different color masks, but still no go.
Any help with this is greatly appreciated!
- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
const float brownsMask[6] = {124, 255, 68, 222, 0, 165};
const float whiteMask[6] = {255, 255, 255, 255, 255, 255};
UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(originalImage.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
CGContextDrawImage (context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), whiteRef);
CGImageRelease(brownRef);
CGImageRelease(whiteRef);
UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[imageView release];
return doctoredImage;
}
Well, I found a good workaround! Basically, I was ultimately using the image data over an MKMapView, so all I needed to do was break the image down into its pixels, and from there I could mess around as I please. It may not be the best option in terms of speed, but it does the trick. Here is a sample of the code I'm now using.
//Split the images into pixels to mess with the image pixel colors individually
size_t bufferLength = gridWidth * gridHeight * 4;
unsigned char *rawData = (unsigned char *)[self convertUIImageToBitmapRGBA8:myUIImage];
grid = malloc(sizeof(float) * (bufferLength / 4));
NSString *hexColor = nil;
float value = 0;
for (int i = 0; i < bufferLength; i = i + 4)
{
    hexColor = [NSString stringWithFormat:@"%02x%02x%02x", (int)(rawData[i + 0]), (int)(rawData[i + 1]), (int)(rawData[i + 2])];
    //mess with colors how you see fit - I just detected certain colors and slapped
    //that into an array of floats which I later put over my mapview much like the
    //hazardmap example from apple.
    if ([hexColor isEqualToString:@"ff0299"]) //pink
        value = (float)11;
    if ([hexColor isEqualToString:@"9933cc"]) //purple
        value = (float)12;
    //etc...
    grid[i/4] = value;
}
I also borrowed some methods (e.g., convertUIImageToBitmapRGBA8) from here: https://gist.github.com/739132
Hope this helps someone out!

Retrieving CGImage from NSView

I am trying to create a CGImage from an NSTextField.
I got some success with this. Still, I can't get a CGImage consisting of only the text. I mean to say that every time I capture the textfield, I get the color of the background window along with it. (Looks like I am not getting the alpha channel info.)
I tried the following snippet from http://www.cocoadev.com/index.pl?ConvertNSImageToCGImage
[theView lockFocus];
NSBitmapImageRep *bm = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[theView bounds]];
[theView unlockFocus];
[bm retain]; // data provider will release this
int rowBytes, width, height;
rowBytes = [bm bytesPerRow];
width = [bm pixelsWide];
height = [bm pixelsHigh];
CGDataProviderRef provider = CGDataProviderCreateWithData( bm, [bm bitmapData], rowBytes * height, BitmapReleaseCallback );
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName( kCGColorSpaceGenericRGB );
CGBitmapInfo bitsInfo = kCGImageAlphaPremultipliedLast;
CGImageRef img = CGImageCreate( width, height, 8, 32, rowBytes, colorspace, bitsInfo, provider, NULL, NO, kCGRenderingIntentDefault );
CGDataProviderRelease( provider );
CGColorSpaceRelease( colorspace );
return img;
Any help to get CGImage without background color?
-initWithFocusedViewRect: reads from the window backing store, so essentially it's a screenshot of that portion of the window. That's why you're getting the window background color in your image.
-[NSView cacheDisplayInRect:toBitmapImageRep:] is very similar, but it causes the view and its subviews, but not its superviews, to redraw themselves. If your text field is borderless, then this might suffice for you. (Make sure to use -bitmapImageRepForCachingDisplayInRect: to create your NSBitmapImageRep!)
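A sketch of that approach; note that the rep is created by the view itself:

```objc
// Redraws the view (and its subviews) into a bitmap, without the window behind it.
NSRect bounds = [theView bounds];
NSBitmapImageRep *rep = [theView bitmapImageRepForCachingDisplayInRect:bounds];
[theView cacheDisplayInRect:bounds toBitmapImageRep:rep];
CGImageRef img = [rep CGImage]; // available on 10.5; owned by the rep
```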
There's one more option that might be considered even more correct than the above. NSTextField draws its content using its NSTextFieldCell. There's nothing really stopping you from just creating an image with the appropriate size, locking focus on it, and then calling -drawInteriorWithFrame:inView:. That should just draw the text, exactly as it was drawn in the text field.
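A hedged sketch of the cell-drawing route (textField is assumed to be your NSTextField):

```objc
NSRect frame = [textField bounds];
NSImage *image = [[NSImage alloc] initWithSize:frame.size];
[image lockFocus];
// Draws just the cell's text content, not the window background.
[[textField cell] drawInteriorWithFrame:frame inView:textField];
[image unlockFocus];
```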
Finally, if you just want to draw text, don't forget about NSStringDrawing. NSString has some methods that will draw with attributes (drawAtPoint:withAttributes:), and NSAttributedString also has drawing methods (drawAtPoint:). You could use one of those instead of asking the NSTextFieldCell to draw for you.
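For example, a sketch using NSAttributedString drawing (the font and string are placeholders):

```objc
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSFont systemFontOfSize:24], NSFontAttributeName, nil];
NSAttributedString *text = [[NSAttributedString alloc]
    initWithString:@"Hello" attributes:attrs];
NSImage *image = [[NSImage alloc] initWithSize:[text size]];
[image lockFocus];
[text drawAtPoint:NSZeroPoint]; // nothing else is drawn, so the alpha stays clear
[image unlockFocus];
```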
