I am implementing a custom text layout algorithm on Mac OS X using Core Text. I have to render portions of a CTRun at different locations inside a custom NSView subclass.
Here is my implementation of the drawRect: method:
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef context =
        (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(context); {
        [[NSColor whiteColor] set];
        NSRectFill(dirtyRect);

        CTFontRef font = CTFontCreateWithName(CFSTR("Menlo"), 20, &CGAffineTransformIdentity);
        CFTypeRef values[] = { font };
        CFStringRef keys[] = { kCTFontAttributeName };
        CFDictionaryRef dictionary =
            CFDictionaryCreate(NULL,
                               (const void **)keys,
                               (const void **)values,
                               sizeof(keys) / sizeof(keys[0]),
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
        CFAttributedStringRef longString =
            CFAttributedStringCreate(kCFAllocatorDefault, CFSTR("this_is_a_very_long_string_that_compromises_many_glyphs,we_wil_see_it:)"), dictionary);
        CTLineRef lineRef = CTLineCreateWithAttributedString(longString);
        CFArrayRef runsArray = CTLineGetGlyphRuns(lineRef);
        CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runsArray, 0);

        // Flip the text matrix because this view uses flipped coordinates.
        CGAffineTransform textTransform = CGAffineTransformIdentity;
        textTransform = CGAffineTransformScale(textTransform, 1.0, -1.0);
        CGContextSetTextMatrix(context, textTransform);

        CGAffineTransform sequenceTransform =
            CGAffineTransformTranslate(CGAffineTransformIdentity, 0, 23.2818);

        CGPoint firstPoint = CGPointApplyAffineTransform(CGPointMake(0, 0), sequenceTransform);
        CFRange firstRange = CFRangeMake(0, 24);
        CGContextSetTextPosition(context, firstPoint.x, firstPoint.y);
        CTRunDraw(run, context, firstRange);

        CGPoint secondPoint = CGPointApplyAffineTransform(CGPointMake(0, 26.2812), sequenceTransform);
        CFRange secondRange = CFRangeMake(24, 24);
        CGContextSetTextPosition(context, secondPoint.x, secondPoint.y);
        CTRunDraw(run, context, secondRange);

        CGPoint thirdPoint = CGPointApplyAffineTransform(CGPointMake(0, 52.5625), sequenceTransform);
        CFRange thirdRange = CFRangeMake(48, 23);
        CGContextSetTextPosition(context, thirdPoint.x, thirdPoint.y);
        CTRunDraw(run, context, thirdRange);

        // Release the Core Foundation objects created above.
        CFRelease(lineRef);
        CFRelease(longString);
        CFRelease(dictionary);
        CFRelease(font);
    }
    CGContextRestoreGState(context);
}
Here is the output of this code:
https://docs.google.com/open?id=0B8df1OdxKw4FYkE5Z1d1VUZQYWs
The problem is that CTRunDraw() inserts blank spaces at the positions outside the specified range.
What I want is for it to render each part of the run at its correct position.
Here is the correct output which I want (the correct output is a Photoshop mock-up of the original output):
https://docs.google.com/open?id=0B8df1OdxKw4FcFRnS0p1cFBfa28
Note: I am using a flipped coordinate system in my custom NSView subclass.
- (BOOL)isFlipped {
return YES;
}
You're misusing CTRun here. A CTRun is a horizontal box containing laid-out glyphs. It doesn't really make sense to try to draw portions of it underneath one another (certainly such typesetting would be wrong in any even slightly complicated case). Why? Well, because there are situations where the glyphs that are chosen might differ if there was a line break at a given position (this can happen, for instance, with ligatures). Also, note that there is not necessarily a 1:1 mapping between characters and glyphs.
My guess is that you probably don’t need your own full custom typesetter (and trust me, writing one is complicated, so if you don’t need one, don’t write one). Instead, just use CTTypesetter and/or CTFramesetter to get the output you want.
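For example, here is a minimal sketch of the CTTypesetter route, reusing longString and context from the question's code. The character ranges are the ones from the question; the baseline positions are illustrative (derived from the question's y offsets):

// Create one properly typeset CTLine per desired character range, instead of
// slicing a single CTRun; the typesetter re-shapes the glyphs for each range.
CTTypesetterRef typesetter = CTTypesetterCreateWithAttributedString(longString);
CFRange ranges[3] = { {0, 24}, {24, 24}, {48, 23} };   // ranges from the question
CGFloat baselines[3] = { 23.2818, 49.563, 75.8443 };   // illustrative y positions
for (int i = 0; i < 3; i++) {
    CTLineRef line = CTTypesetterCreateLine(typesetter, ranges[i]);
    CGContextSetTextPosition(context, 0, baselines[i]);
    CTLineDraw(line, context);
    CFRelease(line);
}
CFRelease(typesetter);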
I want to create a color picker, so I thought NSReadPixel would be a good approach for reading a pixel's color. What I basically did was this:
class CustomWindowController: NSWindowController {
    override func mouseMoved(with event: NSEvent) {
        let mouseLocation = NSEvent.mouseLocation()
        let pickedColor = NSReadPixel(mouseLocation)
    }
}
But pickedColor is always nil. Even if I try to read a pixel at a fixed point (for testing purposes), it still returns nil. What am I missing?
EDIT #1
I’ve followed the NSBitmapImageRep / colorAt approach from the answers and noticed that the resulting NSColor seems to be a bit different (in most cases brighter) than it should be (take a look at the screenshot). Do I have to consider color spaces or so? (And how?)
EDIT #2
Got it to work: bitmap.colorSpaceName = NSDeviceRGBColorSpace does the trick.
NSReadPixel really doesn't work here. I use this code for getting the pixel color:
NSPoint _point = [NSEvent mouseLocation];
CGFloat x = floor(_point.x);
// flip y because AppKit and Core Graphics use different coordinate systems
CGFloat y = [NSScreen mainScreen].frame.size.height - floor(_point.y);
CGWindowID windowID = (CGWindowID)[self windowNumber];
// grab a 1x1 screenshot of whatever lies below this window at the cursor
CGImageRef pixel = CGWindowListCreateImage(CGRectMake(x, y, 1, 1), kCGWindowListOptionOnScreenBelowWindow, windowID, kCGWindowImageNominalResolution);
NSBitmapImageRep *bitmap = [[NSBitmapImageRep alloc] initWithCGImage:pixel];
CGImageRelease(pixel);
NSColor *color = [bitmap colorAtX:0 y:0];
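If the sampled color looks brighter than expected, as described in EDIT #1 above, pinning down the color space helps. A sketch along the lines of EDIT #2 (whether you want device or calibrated RGB depends on your use case):

// Normalize the bitmap to device RGB before sampling (see EDIT #2 above)...
[bitmap setColorSpaceName:NSDeviceRGBColorSpace];
NSColor *color = [bitmap colorAtX:0 y:0];
// ...or convert the sampled color afterwards:
NSColor *deviceColor = [color colorUsingColorSpaceName:NSDeviceRGBColorSpace];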
Core Graphics has two functions, CGPathCreateCopyByDashingPath() and CGPathCreateCopyByStrokingPath(), that both take a CGPath that you want to stroke and convert it into its equivalent fill. I need to do this so I can, for example, stroke a line with a gradient: call CGPathCreateCopyByStrokingPath(), load the path into the CGContext, call CGContextClip(), and then draw the gradient.
However, CGPathCreateCopyByStrokingPath() accepts line stroking parameters like line cap, line join, etc., while CGPathCreateCopyByDashingPath() does not. I would like to be able to dash with a custom line cap/join.
In particular, both functions have the following in their documentation:
The new path is created so that filling the new path draws the same pixels as stroking the original path with the specified dash parameters.
The new path is created so that filling the new path draws the same pixels as stroking the original path.
Emphasis mine. So what I take from this is that once you call either function, you get a new path consisting of the lines that bound the requested stroke. So if I call ByDashing and then ByStroking, the first will create a path consisting of a bunch of little rectangles, and the second will then make paths tracing the perimeters of those little rectangles, which is not what I want. (I can test this and post pictures later.)
Everything I've seen points to Core Graphics being able to do this directly with a CGContext; for instance, the Programming with Quartz book shows round and square line caps in its dashing example. Is there any reason I can't do that with a standalone CGPath?
Am I missing something? Or am I just stuck with this?
This is for OS X, not for iOS.
Thanks.
I'm not sure what the problem is, really. You have a path to start with; let's call it A. You call CGPathCreateCopyByDashingPath() to make a new path from A that has the dashing you want; let's call that B. B does not have particular line caps/joins set on it, because that is not a property of a path, but rather properties used when stroking a path. (Imagine hand-making a dashed path by drawing line segments from point to point; nowhere in that path description is there any concept of caps or joins, just start and end points for each segment.) Then take B and call CGPathCreateCopyByStrokingPath() on it to get C, a fillable path for the stroke of B using particular line width/cap/join characteristics. Finally, fill C using your gradient fill. Does that not work? It seems like you know about all the components that you need to solve your problem, so I'm not sure where the problem actually lies; can you clarify?
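A minimal sketch of that A -> B -> C chain, assuming context is the current CGContext and gradient is a CGGradientRef you have already created (widths, dash lengths, and coordinates are illustrative):

// A: the original path (a simple two-segment line, for illustration)
CGMutablePathRef a = CGPathCreateMutable();
CGPathMoveToPoint(a, NULL, 20, 20);
CGPathAddLineToPoint(a, NULL, 120, 60);
CGPathAddLineToPoint(a, NULL, 220, 20);

// B: A broken up by the dash pattern; caps/joins are not a property of a path,
// so none are specified here
CGFloat lengths[2] = { 10, 6 };
CGPathRef b = CGPathCreateCopyByDashingPath(a, NULL, 0, lengths, 2);

// C: a fillable outline of stroking B with the desired width/cap/join
CGPathRef c = CGPathCreateCopyByStrokingPath(b, NULL, 4, kCGLineCapRound, kCGLineJoinRound, 10);

// Fill C with the gradient by clipping to it
CGContextSaveGState(context);
CGContextAddPath(context, c);
CGContextClip(context);
CGContextDrawLinearGradient(context, gradient, CGPointMake(20, 0), CGPointMake(220, 0), 0);
CGContextRestoreGState(context);

CGPathRelease(c);
CGPathRelease(b);
CGPathRelease(a);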
Turns out the documentation for CGPathCreateCopyByDashingPath() is wrong.
Right now, it says
The new path is created so that filling the new path draws the same pixels as stroking the original path with the specified dash parameters.
This implies that it produces the resultant path with default stroke parameters. But it doesn't! Instead, you get a new path that is just the existing path broken up by the dashing parameters. You will need to call CGPathCreateCopyByStrokingPath() to produce the path to fill instead.
The following program has three sections. First, it shows what the path should look like by drawing with CGContext functions instead of CGPath functions. Second, it draws with only CGPathCreateCopyByDashingPath(). Notice how stroking that path produces not a bunch of boxes where the dashes should be, but just a bunch of dashes; if you look closely, you'll also see a very small blue fill where the line joins are. Finally, it calls CGPathCreateCopyByDashingPath() followed by CGPathCreateCopyByStrokingPath(), and you will see that filling that produces the correct output.
Thanks again, bhaller! I'm not sure what the documentation should be changed to, or how to request such a change.
// 15 october 2015
#import <Cocoa/Cocoa.h>

@interface dashStrokeView : NSView
@end

void putstr(CGContextRef c, const char *str, double x, double y)
{
    NSFont *sysfont;
    CFStringRef string;
    CTFontRef font;
    CFStringRef keys[1];
    CFTypeRef values[1];
    CFDictionaryRef attrs;
    CFAttributedStringRef attrstr;
    CTLineRef line;

    sysfont = [NSFont systemFontOfSize:[NSFont systemFontSizeForControlSize:NSRegularControlSize]];
    font = (CTFontRef) sysfont;        // toll-free bridge
    string = CFStringCreateWithCString(kCFAllocatorDefault,
        str, kCFStringEncodingUTF8);
    keys[0] = kCTFontAttributeName;
    values[0] = font;
    attrs = CFDictionaryCreate(kCFAllocatorDefault,
        keys, values,
        1,
        &kCFTypeDictionaryKeyCallBacks,
        &kCFTypeDictionaryValueCallBacks);
    attrstr = CFAttributedStringCreate(kCFAllocatorDefault, string, attrs);
    line = CTLineCreateWithAttributedString(attrstr);
    CGContextSetTextPosition(c, x, y);
    CTLineDraw(line, c);
    CFRelease(line);
    CFRelease(attrstr);
    CFRelease(attrs);
    CFRelease(string);
}
@implementation dashStrokeView

- (void)drawRect:(NSRect)r
{
    CGContextRef c;
    CGFloat lengths[2] = { 10, 13 };
    CGMutablePathRef buildpath;
    CGPathRef copy, copy2;

    c = (CGContextRef) [[NSGraphicsContext currentContext] graphicsPort];
    CGContextSaveGState(c);

    putstr(c, "Dash + Stroke With CGContext Functions", 10, 10);
    CGContextMoveToPoint(c, 50, 50);
    CGContextAddLineToPoint(c, 100, 30);
    CGContextAddLineToPoint(c, 150, 70);
    CGContextAddLineToPoint(c, 200, 50);
    CGContextSetLineWidth(c, 10);
    CGContextSetLineJoin(c, kCGLineJoinBevel);
    CGContextSetLineCap(c, kCGLineCapRound);
    CGContextSetLineDash(c, 0, lengths, 2);
    CGContextSetRGBStrokeColor(c, 0, 0, 0, 1);
    CGContextStrokePath(c);
    // and reset
    CGContextSetLineWidth(c, 1);
    CGContextSetLineJoin(c, kCGLineJoinMiter);
    CGContextSetLineCap(c, kCGLineCapButt);
    CGContextSetLineDash(c, 0, NULL, 0);

    CGContextTranslateCTM(c, 0, 100);
    putstr(c, "Dash With CGPath Functions", 10, 10);
    buildpath = CGPathCreateMutable();
    CGPathMoveToPoint(buildpath, NULL, 50, 50);
    CGPathAddLineToPoint(buildpath, NULL, 100, 30);
    CGPathAddLineToPoint(buildpath, NULL, 150, 70);
    CGPathAddLineToPoint(buildpath, NULL, 200, 50);
    copy = CGPathCreateCopyByDashingPath(buildpath, NULL, 0, lengths, 2);
    CGContextAddPath(c, copy);
    CGContextStrokePath(c);
    CGContextAddPath(c, copy);
    CGContextSetRGBFillColor(c, 0, 0.25, 0.5, 1);
    CGContextFillPath(c);
    CGPathRelease(copy);
    CGPathRelease((CGPathRef) buildpath);

    CGContextTranslateCTM(c, 0, 100);
    putstr(c, "Dash + Stroke With CGPath Functions", 10, 10);
    buildpath = CGPathCreateMutable();
    CGPathMoveToPoint(buildpath, NULL, 50, 50);
    CGPathAddLineToPoint(buildpath, NULL, 100, 30);
    CGPathAddLineToPoint(buildpath, NULL, 150, 70);
    CGPathAddLineToPoint(buildpath, NULL, 200, 50);
    copy = CGPathCreateCopyByDashingPath(buildpath, NULL, 0, lengths, 2);
    copy2 = CGPathCreateCopyByStrokingPath(copy, NULL, 10, kCGLineCapRound, kCGLineJoinBevel, 10);
    CGContextAddPath(c, copy2);
    CGContextSetRGBFillColor(c, 0, 0.25, 0.5, 1);
    CGContextFillPath(c);
    CGContextAddPath(c, copy2);
    CGContextStrokePath(c);
    CGPathRelease(copy2);
    CGPathRelease(copy);
    CGPathRelease((CGPathRef) buildpath);

    CGContextRestoreGState(c);
}

- (BOOL)isFlipped
{
    return YES;
}

@end
@interface appDelegate : NSObject<NSApplicationDelegate>
@end

@implementation appDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)note
{
    NSWindow *mainwin;
    NSView *contentView;
    dashStrokeView *view;
    NSDictionary *views;
    NSArray *constraints;

    mainwin = [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 320, 360)
        styleMask:(NSTitledWindowMask | NSClosableWindowMask | NSMiniaturizableWindowMask | NSResizableWindowMask)
        backing:NSBackingStoreBuffered
        defer:YES];
    [mainwin setTitle:@"Dash/Stroke Example"];
    contentView = [mainwin contentView];

    view = [[dashStrokeView alloc] initWithFrame:NSZeroRect];
    [view setTranslatesAutoresizingMaskIntoConstraints:NO];
    [contentView addSubview:view];

    views = NSDictionaryOfVariableBindings(view);
    constraints = [NSLayoutConstraint constraintsWithVisualFormat:@"H:|-[view]-|"
        options:0
        metrics:nil
        views:views];
    [contentView addConstraints:constraints];
    constraints = [NSLayoutConstraint constraintsWithVisualFormat:@"V:|-[view]-|"
        options:0
        metrics:nil
        views:views];
    [contentView addConstraints:constraints];

    [mainwin cascadeTopLeftFromPoint:NSMakePoint(20, 20)];
    [mainwin makeKeyAndOrderFront:nil];
}

- (BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)app
{
    return YES;
}

@end

int main(void)
{
    NSApplication *app;

    app = [NSApplication sharedApplication];
    [app setActivationPolicy:NSApplicationActivationPolicyRegular];
    [app setDelegate:[appDelegate new]];
    [app run];
    return 0;
}
Here is my problem:
I use Core Data to store rich text input from iOS and/or OS X apps and would like images pasted into the NSTextView or UITextView to:
a) retain their original resolution, and
b) on display to be scaled to fit the textView correctly, which means scaling based on the size of the view on the device.
Currently I am using - (void)textStorage:(NSTextStorage *)textStorage didProcessEditing:(NSTextStorageEditActions)editedMask range:(NSRange)editedRange changeInLength:(NSInteger)delta to look for attachments, then generate an image with a scale factor and assign it to the textAttachment.image attribute.
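Roughly, that hook looks like this (a sketch using the standard NSTextStorageDelegate method and NSAttributedString attribute enumeration; the image-scaling step itself is elided):

- (void)textStorage:(NSTextStorage *)textStorage
  didProcessEditing:(NSTextStorageEditActions)editedMask
              range:(NSRange)editedRange
     changeInLength:(NSInteger)delta
{
    // Walk the edited range looking for attachments
    [textStorage enumerateAttribute:NSAttachmentAttributeName
                            inRange:editedRange
                            options:0
                         usingBlock:^(id value, NSRange range, BOOL *stop) {
        NSTextAttachment *attachment = (NSTextAttachment *)value;
        if (attachment == nil)
            return;
        // generate a scaled image and assign it to attachment.image here
    }];
}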
This kind of works, because I just change the scale factor and the original image gets retained, but I believe a more elegant solution would be to use an NSTextAttachmentContainer subclass and to return from it an appropriately sized CGRect with
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
So my question is: how do I create and insert such a subclass?
Do I use textStorage:didProcessEditing: to iterate over each attachment and replace its NSTextAttachmentContainer with a class of my own, or can I simply create a category and somehow use it to change the default behaviour? The latter seems much less intrusive, but how do I get my text views to automatically use this category?
Oops: just noticed NSTextAttachmentContainer is a protocol, so I assume creating a category on NSTextAttachment and overriding the method above is an option.
Mmm: you can't use a category to override an existing class method, so I guess subclassing is the only option, in which case how do I get the UITextView to use my attachment subclass? Or do I have to iterate over the attributed string, replacing all NSTextAttachments with instances of MYTextAttachment? And what will be the impact of unarchiving this string on OS X into, say, the default OS X NSTextAttachment (which is a different class from the iOS one)?
Based on this excellent article, if you want to make use of
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
to scale an image text attachment, you have to create your own subclass of NSTextAttachment:
@interface MYTextAttachment : NSTextAttachment
@end
with the scale operation in the implementation:
@implementation MYTextAttachment

- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex {
    CGFloat width = lineFrag.size.width;
    // Scale how you want
    float scalingFactor = 1.0;
    CGSize imageSize = [self.image size];
    if (width < imageSize.width)
        scalingFactor = width / imageSize.width;
    CGRect rect = CGRectMake(0, 0, imageSize.width * scalingFactor, imageSize.height * scalingFactor);
    return rect;
}

@end
This is based on lineFrag.size.width, which gives you (as I understand it) the width taken by the text view on which you have set (or will set) the attributed text "embedding" your custom text attachment.
Once the NSTextAttachment subclass is created, all you have to do is make use of it: create an instance, set an image, then create a new attributed string with it and append that to an NSMutableAttributedString, for example:
MYTextAttachment* _textAttachment = [MYTextAttachment new];
_textAttachment.image = [UIImage ... ];
[_myMutableAttributedString appendAttributedString:[NSAttributedString attributedStringWithAttachment:_textAttachment]];
For info, it seems that
- (CGRect)attachmentBoundsForTextContainer:(NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex
is called whenever the text view is laid out again.
Hope it helps, even though it doesn't answer every aspect of your problem.
Swift 3 (based on @Bluezen's answer):
class MyTextAttachment : NSTextAttachment {
    override func attachmentBounds(for textContainer: NSTextContainer?, proposedLineFragment lineFrag: CGRect, glyphPosition position: CGPoint, characterIndex charIndex: Int) -> CGRect {
        guard let image = self.image else {
            return CGRect.zero
        }
        let height = lineFrag.size.height
        // Scale how you want
        var scalingFactor = CGFloat(0.8)
        let imageSize = image.size
        if height < imageSize.height {
            scalingFactor *= height / imageSize.height
        }
        let rect = CGRect(x: 0, y: 0, width: imageSize.width * scalingFactor, height: imageSize.height * scalingFactor)
        return rect
    }
}
Note that I am scaling based on height, as I was getting a lineFrag width value of 10000000 in my particular use case. Also note that I replaced scalingFactor = ... with scalingFactor *= ... so that I could use an additional, non-unity scaling factor (0.8 in this case).
I'm a bit new to Objective-C and even newer at programming with Quartz 2D, so apologies in advance! I've got a method where I would like to remove a handful of specific colors (not just one) from a UIImage.
When I run my project with just one color mask being applied, it works beautifully. Once I try stacking them, the 'whiteRef' comes out NULL. I've even tried modifying my method to take a color mask and then simply running my method twice, feeding in the different color masks, but still no go.
Any help with this is greatly appreciated!
- (UIImage *)doctorTheImage:(UIImage *)originalImage
{
    const float brownsMask[6] = {124, 255, 68, 222, 0, 165};
    const float whiteMask[6] = {255, 255, 255, 255, 255, 255};

    UIImageView *imageView = [[UIImageView alloc] initWithImage:originalImage];
    UIGraphicsBeginImageContext(originalImage.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef brownRef = CGImageCreateWithMaskingColors(imageView.image.CGImage, brownsMask);
    CGImageRef whiteRef = CGImageCreateWithMaskingColors(brownRef, whiteMask);
    CGContextDrawImage(context, CGRectMake(0, 0, imageView.image.size.width, imageView.image.size.height), whiteRef);
    CGImageRelease(brownRef);
    CGImageRelease(whiteRef);
    UIImage *doctoredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [imageView release];
    return doctoredImage;
}
Well, I found a good workaround! Basically, I was ultimately using the image data over an MKMapView, so all I needed to do was break the image down into its pixels, and from there I could mess around as I please. It may not be the best option in terms of speed, but it does the trick. Here is a sample of the code I'm now using.
// Split the image into pixels to mess with the pixel colors individually
size_t bufferLength = gridWidth * gridHeight * 4;
unsigned char *rawData = nil;
rawData = (unsigned char *)[self convertUIImageToBitmapRGBA8:myUIImage];
grid = malloc(sizeof(float) * (bufferLength / 4));

NSString *hexColor = nil;
float value = 0; // declared here; the original snippet assumed it existed elsewhere
for (int i = 0; i < bufferLength; i = i + 4)
{
    hexColor = [NSString stringWithFormat:@"%02x%02x%02x", (int)(rawData[i + 0]), (int)(rawData[i + 1]), (int)(rawData[i + 2])];

    // Mess with colors how you see fit - I just detected certain colors and slapped
    // that into an array of floats which I later put over my mapview, much like the
    // HazardMap example from Apple.
    if ([hexColor isEqualToString:@"ff0299"]) // pink
        value = (float)11;
    if ([hexColor isEqualToString:@"9933cc"]) // purple
        value = (float)12;
    // etc...

    grid[i/4] = value;
}
free(rawData); // the conversion helper hands back a malloc'd buffer
I also borrowed some methods (i.e. convertUIImageToBitmapRGBA8) from here: https://gist.github.com/739132
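For reference, the core of that helper boils down to drawing the image into an RGBA8 bitmap context, along these lines (a minimal sketch without the gist's error checking; the method name matches the call above):

- (unsigned char *)convertUIImageToBitmapRGBA8:(UIImage *)image
{
    CGImageRef imageRef = image.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    // One byte each for R, G, B, A per pixel
    unsigned char *buffer = calloc(width * height * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Render the image into the buffer, then hand the raw bytes back
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(ctx);
    return buffer; // caller must free()
}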
Hope this helps someone out!
I need to draw centered text to a CGContext.
I started with a Cocoa approach: I created an NSCell with the text and tried to draw it thus:
NSGraphicsContext *newCtx = [NSGraphicsContext
    graphicsContextWithGraphicsPort:bitmapContext flipped:true];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:newCtx];
[pCell setFont:font];
[pCell drawWithFrame:rect inView:nil];
[NSGraphicsContext restoreGraphicsState];
But the CGBitmapContext doesn't seem to have the text rendered on it. Possibly because I have to pass nil for the inView: parameter.
So I tried switching text rendering to Core Graphics:
The simplest way seems to be to use CGContextSelectFont to select a font using its PostScript name and point size, but CGContextShowTextAtPoint only takes non-Unicode characters, and there is no apparent way to fit the text to a rectangle, or to compute the extents of a line of text to manually lay out the rectangle.
Then there is a CGFont that can be created and set via CGContextSetFont. Drawing this text requires CGContextShowGlyphsAtPoint, but again the CGContext seems to be lacking functions to compute the bounding rect of generated text, or to wrap the text to a rect. Plus, how to transform a string into an array of CGGlyphs is not obvious.
The next option is to try using CoreText to render the string. But the Core Text classes are hugely complicated, and while there are samples that show how to display text, in a specified font, in a rect, there are no samples demonstrating how to compute the bounding rect of a CoreText string.
So:
Given a CGFont and a CGContext, how do I compute the bounding rect of some text, and how do I transform a text string into an array of CGGlyphs?
Given a string, a CGContext, and a PostScript name and point size, what Core Text objects do I need to create to compute the bounding rect of the string, and/or draw the string wrapped to a rect on the CGContext?
Given a string and an NSFont, how do I render the string onto a CGBitmapContext? I already know how to get its extents.
I finally managed to find answers after 4 days of searching. I really wish Apple made better documentation. So here we go.
I assume you already have a CGFontRef with you. If not, let me know and I will tell you how to load a TTF from the resource bundle into a CGFontRef.
Below is the code snippet to compute the bounds of any string with any CGFontRef:
int charCount = [string length];
CGGlyph glyphs[charCount];
CGRect rects[charCount];

CTFontGetGlyphsForCharacters(theCTFont, (const unichar *)[string cStringUsingEncoding:NSUnicodeStringEncoding], glyphs, charCount);
CTFontGetBoundingRectsForGlyphs(theCTFont, kCTFontDefaultOrientation, glyphs, rects, charCount);

CGFloat totalwidth = 0, maxheight = 0;
for (int i = 0; i < charCount; i++)
{
    totalwidth += rects[i].size.width;
    maxheight = maxheight < rects[i].size.height ? rects[i].size.height : maxheight;
}

CGSize dim = CGSizeMake(totalwidth, maxheight);
Reuse the same CTFontGetGlyphsForCharacters function to get the glyphs. To get a CTFontRef from a CGFontRef, use the CTFontCreateWithGraphicsFont() function.
Also remember that NSFont and CTFontRef are toll-free bridged, meaning they can be cast into each other and will work seamlessly without any extra work.
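A small sketch of both routes, assuming cgFont is a CGFontRef you loaded earlier (the point size is illustrative):

// From a CGFontRef: wrap it in a CTFontRef at a given point size
CTFontRef fromCGFont = CTFontCreateWithGraphicsFont(cgFont, 14.0, NULL, NULL);

// From an NSFont: just cast, since NSFont and CTFontRef are toll-free bridged
CTFontRef fromNSFont = (CTFontRef)[NSFont systemFontOfSize:14.0];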
I would continue with your above approach but use NSAttributedString instead.
NSGraphicsContext* newCtx = [NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:true];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:newCtx];
NSAttributedString *string = /* make a string with all of the desired attributes */;
[string drawInRect:locationToDraw];
[NSGraphicsContext restoreGraphicsState];
Swift 5 version!
let targetSize: CGSize = // the space you have available for drawing the text
let origin: CGPoint = // where you want to position the top-left corner
let string: String = // your string
let font: UIFont = // your font

let attrs: [NSAttributedString.Key: Any] = [.font: font]
// NSStringDrawing lives on NSString, hence the bridging casts
let boundingRect = (string as NSString).boundingRect(with: targetSize, options: [.usesLineFragmentOrigin], attributes: attrs, context: nil)
let textRect = CGRect(origin: origin, size: boundingRect.size)
(string as NSString).draw(with: textRect, options: [.usesLineFragmentOrigin], attributes: attrs, context: nil)
A complete sample code (Objective-C):
// Need to create a PDF graphics context for drawing text
CGContextRef pdfContextRef = NULL;
CFURLRef writeFileUrl = (CFURLRef)[NSURL fileURLWithPath:writeFilePath];
if (writeFileUrl != NULL) {
    pdfContextRef = CGPDFContextCreateWithURL(writeFileUrl, NULL, NULL);
}

// Start a page in the PDF context
CGPDFContextBeginPage(pdfContextRef, NULL);

NSGraphicsContext *pdfGraphicsContext = [NSGraphicsContext graphicsContextWithCGContext:pdfContextRef flipped:false];
[NSGraphicsContext saveGraphicsState];
// Need to set the current graphics context
[NSGraphicsContext setCurrentContext:pdfGraphicsContext];

NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSFont fontWithName:@"Helvetica" size:26], NSFontAttributeName, [NSColor blackColor], NSForegroundColorAttributeName, nil];
NSAttributedString *currentText = [[NSAttributedString alloc] initWithString:@"Write Something" attributes:attributes];
// [currentText drawInRect:CGRectMake(0, 300, 500, 100)];
[currentText drawAtPoint:NSMakePoint(100, 100)];
[NSGraphicsContext restoreGraphicsState];

// Close the created PDF context
CGPDFContextEndPage(pdfContextRef);
CGPDFContextClose(pdfContextRef);