How can I calculate (without search) a font size to fit a rect?

I want my text to fit within a specific rect, so I need something to determine a font size. Existing questions have tackled this to an extent, but they do a search, which seems horribly inefficient, especially if you want to recalculate during a live dragging resize. The following example could be improved with a binary search and by constraining to the height, but it is still a search. Instead of searching, how can I calculate a font size to fit a rect?
#define kMaxFontSize 10000

- (CGFloat)fontSizeForAreaSize:(NSSize)areaSize withString:(NSString *)stringToSize usingFont:(NSString *)fontName
{
    NSFont *displayFont = nil;
    NSSize stringSize = NSZeroSize;

    // Bail out early (before allocating) for a zero-size area.
    if (areaSize.width == 0.0 && areaSize.height == 0.0)
        return 0.0;

    NSMutableDictionary *fontAttributes = [[NSMutableDictionary alloc] init];
    NSUInteger fontLoop = 0;
    for (fontLoop = 1; fontLoop <= kMaxFontSize; fontLoop++) {
        displayFont = [[NSFontManager sharedFontManager] convertWeight:YES ofFont:[NSFont fontWithName:fontName size:fontLoop]];
        [fontAttributes setObject:displayFont forKey:NSFontAttributeName];
        stringSize = [stringToSize sizeWithAttributes:fontAttributes];
        if (stringSize.width > areaSize.width)
            break;
        if (stringSize.height > areaSize.height)
            break;
    }
    [fontAttributes release], fontAttributes = nil;
    return (CGFloat)fontLoop - 1.0;
}

Pick any font size and measure the text at that size. Divide each of its dimensions (width and height) by the same dimension of your target rectangle, then divide the font size by the larger factor.
Note that the text will measure on one line, since there is no maximum width for it to wrap to. For a long line/string, this may result in an unusefully-small font size. For a text field, you should simply enforce a minimum size (such as the small system font size), and set the field's truncation behavior. If you intend to wrap the text, you'll need to measure it with something that takes a bounding rectangle or size.
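If you do need to measure wrapped text, a minimal sketch (a hypothetical helper built on -[NSAttributedString boundingRectWithSize:options:], constraining the width and leaving the height unbounded) might look like this:
- (NSSize)wrappedSizeOfString:(NSString *)string withFont:(NSFont *)font maxWidth:(CGFloat)maxWidth
{
    NSDictionary *attributes = [NSDictionary dictionaryWithObject:font forKey:NSFontAttributeName];
    NSAttributedString *attributed = [[[NSAttributedString alloc] initWithString:string
                                                                      attributes:attributes] autorelease];
    // Constrain the width so the text wraps; leave the height effectively unbounded.
    NSRect bounds = [attributed boundingRectWithSize:NSMakeSize(maxWidth, CGFLOAT_MAX)
                                             options:NSStringDrawingUsesLineFragmentOrigin];
    return bounds.size;
}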
Code by asker roughly based on this idea:
- (float)scaleToAspectFit:(CGSize)source into:(CGSize)into padding:(float)padding
{
    return MIN((into.width - padding) / source.width, (into.height - padding) / source.height);
}

- (NSFont *)fontSizedForAreaSize:(NSSize)size withString:(NSString *)string usingFont:(NSFont *)font
{
    NSFont *sampleFont = [NSFont fontWithDescriptor:font.fontDescriptor size:12.0]; // use a standard size to prevent error accrual
    CGSize sampleSize = [string sizeWithAttributes:[NSDictionary dictionaryWithObjectsAndKeys:sampleFont, NSFontAttributeName, nil]];
    float scale = [self scaleToAspectFit:sampleSize into:size padding:10];
    return [NSFont fontWithDescriptor:font.fontDescriptor size:scale * sampleFont.pointSize];
}

- (void)windowDidResize:(NSNotification *)notification
{
    text.font = [self fontSizedForAreaSize:text.frame.size withString:text.stringValue usingFont:text.font];
}

Related

Scale NSString Font to UIImage Size

I'm writing text onto an image taken from the iDevice's camera or chosen from the photo library, but I need the font size scaled according to the width/height of the image. Here's my current code:
UIGraphicsBeginImageContext(img.size);
CGRect aRectangle = CGRectMake(0, 0, img.size.width, img.size.height);
[img drawInRect:aRectangle];
[[UIColor whiteColor] set];                  // set text color
NSInteger fontSize = 45;
UIFont *font = [UIFont systemFontOfSize:fontSize]; // set text font
[text drawInRect:aRectangle                  // render the text
        withFont:font
   lineBreakMode:UILineBreakModeTailTruncation // clip overflow from end of last line
       alignment:UITextAlignmentCenter];
UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext(); // extract the image
UIGraphicsEndImageContext();                 // clean up the context
return theImage;
You want something like this. It creates a rect half the size of your image, then uses a binary search to zoom in on the font size you want.
//---- initial data
CGSize szMax = CGSizeMake(<imgWidth> / 2, <imgHeight> / 2);
CGFloat fMin = 2;   //-- the smallest font size we want
CGFloat fMax = 100; //-- the largest font size we want
CGFloat fMid;       //-- the midpoint of min and max

while ((fMax - fMin) >= 1.0) //-- repeat until the sizes converge
{
    fMid = (fMin + fMax) / 2.0;                     //-- compute mid-point
    UIFont *pfnt = [UIFont systemFontOfSize: fMid]; //-- create middle-sized font

    //---- compute the size of the text using the mid-sized font
    CGSize szStr = [pStr sizeWithFont: pfnt constrainedToSize: szMax lineBreakMode: UILineBreakModeWordWrap];

    if (szStr.height > szMax.height)
        fMax = fMid; //-- text too tall, set max to mid-point
    else
        fMin = fMid; //-- text fits, set min to mid-point
}
//---- 'fMid' is the size you want.
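Note that sizeWithFont:constrainedToSize:lineBreakMode: was deprecated in iOS 7. A sketch of the same measurement with its replacement (assuming iOS 7 or later) would be:
CGSize szStr = [pStr boundingRectWithSize: szMax
                                  options: NSStringDrawingUsesLineFragmentOrigin
                               attributes: [NSDictionary dictionaryWithObject: pfnt forKey: NSFontAttributeName]
                                  context: nil].size;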

Changing size of NSTextView

I am trying to implement an NSTextView which resizes to fit its content. I also want it to have a maximum number of lines: when that limit is reached, it should stop growing and let the user scroll.
There seem to be a lot of people wanting this but I have not found a complete implementation of this.
The idea is to take the content size of the NSTextView and resize the view to match. I just can't figure out how to set the layout size of the NSTextView. It does not seem to be its frame that needs to be set, as one might expect.
Can anyone please tell me how to set the frame of the NSTextView?
It turns out that it's the height of the scroll view which should be changed. Changing the height of an NSTextView can be done by overriding the -didChangeText method as below.
- (void)didChangeText
{
    NSScrollView *scrollView = (NSScrollView *)self.superview.superview;
    NSRect frame = scrollView.frame;

    // Calculate height
    NSUInteger numberOfLines = [self numberOfLines];
    NSUInteger height = frame.size.height;
    if (numberOfLines <= 13)
    {
        height = 22 + (numberOfLines - 1) * 14;
        if (height < 22)
            height = 22;
    }

    // Update height
    frame.size.height = height;
    self.superview.superview.frame = frame;
}
Play around with the constants to get the result you wish for.
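Note that -numberOfLines is not a built-in NSTextView method. One possible implementation (a sketch following the line-counting pattern from Apple's text layout documentation, walking the layout manager's line fragments) is:
- (NSUInteger)numberOfLines
{
    NSLayoutManager *layoutManager = [self layoutManager];
    NSUInteger numberOfGlyphs = [layoutManager numberOfGlyphs];
    NSUInteger numberOfLines = 0;
    NSRange lineRange = NSMakeRange(0, 0);
    NSUInteger index = 0;
    while (index < numberOfGlyphs)
    {
        // Each pass skips over one line fragment.
        (void)[layoutManager lineFragmentRectForGlyphAtIndex:index effectiveRange:&lineRange];
        index = NSMaxRange(lineRange);
        numberOfLines++;
    }
    return MAX(numberOfLines, (NSUInteger)1); // treat empty text as one line (avoids underflow above)
}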

Precise pixel grid overlay in Core Graphics?

In my experiments with creating a pixel-centered image editor I've been trying to draw a precise grid overlay to help guide users when trying to access certain pixels. However, the grid I draw isn't very even, especially at smaller sizes. It's a regular pattern of one slightly larger column for every few normal columns, so I think it's a rounding issue, but I can't see it in my code. Here's my code:
- (void)drawRect:(NSRect)dirtyRect
{
    context = [[NSGraphicsContext currentContext] graphicsPort];
    CGContextAddRect(context, NSRectToCGRect(self.bounds));
    CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 0.0f, 1.0f);
    CGContextStrokePath(context);
    CGContextSetInterpolationQuality(context, kCGInterpolationNone);
    CGContextSetShouldAntialias(context, NO);
    if (image)
    {
        NSRect imageRect = NSZeroRect;
        imageRect.size = CGImageGetSize([image CGImage]);
        drawRect = [self bounds];
        NSRect viewRect = drawRect;
        CGFloat aspectRatio = imageRect.size.width / imageRect.size.height;
        if (viewRect.size.width / viewRect.size.height <= aspectRatio)
        {
            drawRect.size.width = viewRect.size.width;
            drawRect.size.height = imageRect.size.height * (viewRect.size.width / imageRect.size.width);
        }
        else
        {
            drawRect.size.height = viewRect.size.height;
            drawRect.size.width = imageRect.size.width * (viewRect.size.height / imageRect.size.height);
        }
        drawRect.origin.x += (viewRect.size.width - drawRect.size.width) / 2.0;
        drawRect.origin.y += (viewRect.size.height - drawRect.size.height) / 2.0;
        CGContextDrawImage(context, drawRect, [image CGImage]);
        if (showPixelGrid)
        {
            //Draw grid by creating start and end points for vertical and horizontal lines.
            //FIXME: Grid is uneven, especially at smaller sizes.
            CGContextSetStrokeColorWithColor(context, CGColorGetConstantColor(kCGColorBlack));
            CGContextAddRect(context, drawRect);
            CGContextStrokePath(context);
            NSUInteger numXPoints = (NSUInteger)imageRect.size.width * 2;
            NSUInteger numYPoints = (NSUInteger)imageRect.size.height * 2;
            CGPoint xPoints[numXPoints];
            CGPoint yPoints[numYPoints];
            CGPoint startPoint;
            CGPoint endPoint;
            CGFloat widthRatio = drawRect.size.width / imageRect.size.width;
            CGFloat heightRatio = drawRect.size.height / imageRect.size.height;
            startPoint.x = drawRect.origin.x;
            startPoint.y = drawRect.origin.y;
            endPoint.x = drawRect.origin.x;
            endPoint.y = drawRect.size.height + drawRect.origin.y;
            for (NSUInteger i = 0; i < numXPoints; i += 2)
            {
                startPoint.x += widthRatio;
                endPoint.x += widthRatio;
                xPoints[i] = startPoint;
                xPoints[i + 1] = endPoint;
            }
            startPoint.x = drawRect.origin.x;
            startPoint.y = drawRect.origin.y;
            endPoint.x = drawRect.size.width + drawRect.origin.x;
            endPoint.y = drawRect.origin.y;
            for (NSUInteger i = 0; i < numYPoints; i += 2)
            {
                startPoint.y += heightRatio;
                endPoint.y += heightRatio;
                yPoints[i] = startPoint;
                yPoints[i + 1] = endPoint;
            }
            CGContextStrokeLineSegments(context, xPoints, numXPoints);
            CGContextStrokeLineSegments(context, yPoints, numYPoints);
        }
    }
}
Any ideas?
UPDATE: I managed to get your code running with a few tweaks (where did CGImageGetSize() come from?), and I can't really see the problem, other than that the columns aren't all exactly even at extremely small sizes. That's just how it has to work, though. The only ways around this are to fix scaling to integer multiples of the image size (in other words, use the largest integer multiple of the image size that is smaller than the view size), or to reduce the number of lines drawn at very small sizes to get rid of this artefact. There's a reason the pixel grid only becomes visible when you zoom in a long way in most editors. Not to mention that if the grid is still visible at 3-4x resolution, you're making the view far too busy.
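As a sketch of the integer-multiple idea, reusing the imageRect/viewRect/drawRect names from the code above:
// Zoom by the largest whole-number factor that still fits the view,
// so every image pixel maps to an integral number of screen pixels.
NSUInteger zoom = (NSUInteger)MAX(1.0, floor(MIN(viewRect.size.width / imageRect.size.width,
                                                 viewRect.size.height / imageRect.size.height)));
drawRect.size.width = imageRect.size.width * zoom;
drawRect.size.height = imageRect.size.height * zoom;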
I couldn't run the code you provided because there's a bunch of class ivars in there, but from a cursory glance, I'd say it has something to do with drawing on pixel boundaries. After you round to an integer to get rid of fuzzy AA artefacts (I notice you turned AA off, but ideally you shouldn't have to do that), you then need to add 0.5 to your origin to get your line drawn in the center of the pixel rather than on the boundary.
Like this:
+---X---+---+---+---+---+
| | | | Y | | |
+---+---+---+---+---+---+
X : CGPoint (1, 1)
Y : CGPoint (3.5, 0.5)
You want to draw from the center of the pixel, because otherwise your line straddles two pixels.
In other words, where you're setting up xPoints and yPoints, make sure to floor() or round() your values, and then add 0.5.
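Applied to the question's vertical-line loop, that might look like this (a sketch; the horizontal loop is analogous):
for (NSUInteger i = 0; i < numXPoints; i += 2)
{
    // Compute each line's x from the loop index rather than accumulating,
    // snap it to an integral coordinate, then offset by 0.5 so the 1px
    // stroke is centered on a pixel instead of straddling two.
    CGFloat x = floor(drawRect.origin.x + ((i / 2) + 1) * widthRatio) + 0.5;
    xPoints[i] = CGPointMake(x, drawRect.origin.y);
    xPoints[i + 1] = CGPointMake(x, drawRect.origin.y + drawRect.size.height);
}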

drawRect performance

I need to draw lots of polygons, 500k to a million, on the iPad. After experimenting, I can only get 1 fps, if that. This is just an example; my real code has some good-sized polygons.
Here are a few questions:
Why don't I have to add the Quartz Framework to my project?
If many of the polygons repeat, can I leverage that with views, or are they too heavyweight?
Any alternatives? QTPaint can handle this, but it dips into the GPU. Is there anything like Qt for iOS?
Can OpenGL increase 2D performance for this kind of drawing?
Example drawrect:
//X Y Array of boxes
- (void)drawRect:(CGRect)rect
{
int reset = [self pan].x;
int markX = reset;
int markY = [self pan].y;
CGContextRef context = UIGraphicsGetCurrentContext();
for(int i = 0; i < 1000; i++)//1,000,000
{
for(int j = 0; j < 1000; j++)
{
CGContextMoveToPoint(context, markX, markY);
CGContextAddLineToPoint(context, markX, markY + 10);
CGContextAddLineToPoint(context, markX + 10, markY + 10);
CGContextAddLineToPoint(context, markX + 10, markY);
CGContextAddLineToPoint(context, markX, markY);
CGContextStrokePath(context);
markX+=12;
}
markY += 12;
markX = reset;
}
}
The pan just moves the array of boxes around on screen with a pan gesture. Any help or hints would be greatly appreciated.
The key issue with your example is that it is not optimized. Whenever drawRect: is called, the device is rendering all 1,000,000 squares. Worse still, it's making 6,000,000 calls to those APIs in the loop. If you want to refresh this view at even a modest 30fps, that is 180,000,000 calls / second.
With your 'simple' example, the size of the draw area is 12,000px × 12,000px; the maximum area you can display on the iPad's display is 768×1024 (assuming full-screen portrait). Therefore, the code is wasting a lot of CPU resources drawing outside the visible area. UIKit has ways of handling this scenario with relative ease.
When managing content that is significantly larger than the visible area, you should limit drawing to only what is visible. UIKit has a couple of ways of handling this; a UIScrollView in combination with a view backed by a CATiledLayer is your best bet.
Steps:
Disclaimer: This is specifically an optimization of your example code above
Create a new View Based Application iPad project
Add a reference to the QuartzCore.framework
Create a new class, say MyLargeView, subclassed from UIView and add the following code:
#import <QuartzCore/QuartzCore.h>

@implementation MyLargeView

- (void)awakeFromNib {
    CATiledLayer *tiledLayer = (CATiledLayer *)[self layer];
    tiledLayer.tileSize = CGSizeMake(512.0f, 512.0f);
}

// Set the layer's class to be CATiledLayer.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // Drawing code: only draws what is specified by the rect parameter.
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set up some constants for the objects being drawn
    const CGFloat width = 10.0f;             // width of rect
    const CGFloat height = 10.0f;            // height of rect
    const CGFloat xSpace = 4.0f;             // space between cells (horizontal)
    const CGFloat ySpace = 4.0f;             // space between cells (vertical)
    const CGFloat tWidth = width + xSpace;   // total width of cell
    const CGFloat tHeight = height + ySpace; // total height of cell

    CGFloat xStart = floorf(rect.origin.x / tWidth);  // first visible cell (column)
    CGFloat yStart = floorf(rect.origin.y / tHeight); // first visible cell (row)
    CGFloat xCells = rect.size.width / tWidth + 1;    // number of horizontal visible cells
    CGFloat yCells = rect.size.height / tHeight + 1;  // number of vertical visible cells

    for (int x = xStart; x < (xStart + xCells); x++) {
        for (int y = yStart; y < (yStart + yCells); y++) {
            CGFloat xpos = x * tWidth;
            CGFloat ypos = y * tHeight;
            CGContextMoveToPoint(context, xpos, ypos);
            CGContextAddLineToPoint(context, xpos, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos + height);
            CGContextAddLineToPoint(context, xpos + width, ypos);
            CGContextAddLineToPoint(context, xpos, ypos);
            CGContextStrokePath(context);
        }
    }
}

@end
Edit the view controller nib and add a UIScrollView to the view
Add a UIView to the UIScrollView and make sure it fills the UIScrollView
Change the class to MyLargeView
Set frame size of MyLargeView to 12,000×12,000
Finally, open up the view controller .m file and add the following override:
// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIScrollView *scrollView = [self.view.subviews objectAtIndex:0];
    scrollView.contentSize = CGSizeMake(12000, 12000);
}
If you look at the drawRect: call, it is only drawing into the area specified by the rect parameter, which will correspond to the tile size (512×512) for the CATiledLayer we configured in the awakeFromNib method. This will scale to a 1,000,000×1,000,000 pixel canvas.
Alternatives to look at are the ScrollViewSuite example, specifically 3_Tiling.
OpenGL is GPU hardware accelerated on iOS devices. Core Graphics drawing is not, and can be many many times slower when dealing with a large number of small graphics primitives (lines).
For lots of small squares, just writing them into a bitmap in C code is faster than Core Graphics line drawing. Then just draw the bitmap to the view once when done. But OpenGL would be even faster.
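As a rough sketch of that bitmap idea (the 1024×1024 size and the 12px cell layout are assumptions carried over from the example): write the square outlines straight into a bitmap context's pixel buffer once, snapshot it as a CGImage, and have drawRect: do a single CGContextDrawImage() call.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 1024, 1024, 8, 0, space,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
uint8_t *pixels = CGBitmapContextGetData(bitmap);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmap);
for (size_t y = 0; y < 1024; y++) {
    for (size_t x = 0; x < 1024; x++) {
        size_t cx = x % 12, cy = y % 12; // position within one 12px cell
        BOOL onOutline = cx < 10 && cy < 10 &&
                         (cx == 0 || cx == 9 || cy == 0 || cy == 9);
        if (onOutline)
            pixels[y * bytesPerRow + x * 4 + 3] = 255; // set alpha only: opaque black
    }
}
CGImageRef cachedSquares = CGBitmapContextCreateImage(bitmap);
CGContextRelease(bitmap);
// Draw cachedSquares once per frame; CGImageRelease() it when no longer needed.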
Regarding point 4: OpenGL should handle that fine. Check whether you can reuse those objects and whether you can move some of the logic into GLSL code.
OpenGL performance optimization (in context of WebGL but most of it should apply): http://www.youtube.com/watch?v=rfQ8rKGTVlg
I don't know the details of iOS history so this may not have been an option when the question was first posted. However, I wanted to call out CAShapeLayer as a simple option when dealing with path performance problems. "iOS Core Animation: Advanced Techniques" (find it on Google Books) says CAShapeLayer "uses hardware-accelerated drawing" which I'm taking to mean that it's a GPU-based implementation. The same book has a good usage example in chapter 6, which boils down to this:
Create a CAShapeLayer
Configure its lineWidth, fillColor, strokeColor, etc.
Add the layer as a sublayer of your view's containerView.layer
To draw a path, just set it to the layer's "path" property
This made a gigantic performance difference in my app, as measured by Instruments. If your performance problem is path-based, don't wade into OpenGL before you've tried CAShapeLayer.
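For reference, a minimal sketch of those four steps (the sublayer variant; containerView is whatever view hosts the drawing):
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.lineWidth = 2.0f;
shapeLayer.strokeColor = [UIColor blackColor].CGColor;
shapeLayer.fillColor = nil; // stroke only
[containerView.layer addSublayer:shapeLayer];

// Drawing is now just a property assignment; no drawRect: involved.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:CGRectMake(10, 10, 100, 100)];
shapeLayer.path = path.CGPath;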
I encountered the same problem. After endless searching on Google, CAShapeLayer finally saved me! Here are the detailed steps you need to follow:
Create a view with CAShapeLayer as its layer type by overriding UIView's + (Class)layerClass method
Configure the layer's lineWidth, fillColor, strokeColor, etc.
Create a UIBezierPath instance
To draw a path, use the UIBezierPath instance to add lines, curves, or arcs; when you have finished drawing, just set bezierPath.CGPath as the layer's "path" property
Here is a simple demo to draw a simple curve when you touch the demo view:
// Simple ShapeLayerView.m
- (instancetype)init {
    self = [super init];
    if (self) {
        _bezierPath = [UIBezierPath bezierPath];
        CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
        shapeLayer.lineWidth = 5;
        shapeLayer.lineJoin = kCALineJoinRound;
        shapeLayer.lineCap = kCALineCapRound;
        shapeLayer.strokeColor = [UIColor yellowColor].CGColor;
        shapeLayer.fillColor = [UIColor blueColor].CGColor;
    }
    return self;
}

+ (Class)layerClass {
    return [CAShapeLayer class];
}

- (void)customDrawShape {
    CAShapeLayer *shapeLayer = (CAShapeLayer *)self.layer;
    [_bezierPath removeAllPoints];
    [_bezierPath moveToPoint:CGPointMake(10, 10)];
    [_bezierPath addQuadCurveToPoint:CGPointMake(2, 2) controlPoint:CGPointMake(50, 50)];
    shapeLayer.path = _bezierPath.CGPath;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    [self customDrawShape];
}

Is there a best way to size an NSImage to a maximum filesize?

Here's what I've got so far:
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:[file.image TIFFRepresentation]];

// Resize image to 200x200
CGFloat maxSize = 200.0;
NSSize imageSize = imageRep.size;
if (imageSize.height > maxSize || imageSize.width > maxSize) {
    // Find the aspect ratio
    CGFloat aspectRatio = imageSize.height / imageSize.width;
    CGSize newImageSize;
    if (aspectRatio > 1.0) {
        newImageSize = CGSizeMake(maxSize / aspectRatio, maxSize);
    } else if (aspectRatio < 1.0) {
        newImageSize = CGSizeMake(maxSize, maxSize * aspectRatio);
    } else {
        newImageSize = CGSizeMake(maxSize, maxSize);
    }
    [imageRep setSize:NSSizeFromCGSize(newImageSize)];
}
NSData *imageData = [imageRep representationUsingType:NSPNGFileType properties:nil];
NSString *outputFilePath = [@"~/Desktop/output.png" stringByExpandingTildeInPath];
[imageData writeToFile:outputFilePath atomically:NO];
The code assumes that a 200x200 PNG will be less than 128K, which is my size limit. 200x200 is big enough, but I'd prefer to max out the size if at all possible.
Here are my two problems:
The code doesn't work. I check the size of the exported file and it's the same size as the original.
Is there a way to predict the size of the output file before I export, so I can max out the dimensions but still get an image that's less than 128K?
Here's the working code. It's pretty sloppy and could probably use some optimizations, but at this point it runs fast enough that I don't care. It iterates over 100x for most pictures, and it's over in milliseconds. Also, this method is declared in a category on NSImage.
- (NSData *)resizeImageWithBitSize:(NSInteger)size
                      andImageType:(NSBitmapImageFileType)fileType {
    CGFloat maxSize = 500.0;
    NSSize originalImageSize = self.size;
    NSSize newImageSize;
    NSData *returnImageData;
    NSInteger imageIsTooBig = 1000;

    while (imageIsTooBig > 0) {
        if (originalImageSize.height > maxSize || originalImageSize.width > maxSize) {
            // Find the aspect ratio
            CGFloat aspectRatio = originalImageSize.height / originalImageSize.width;
            if (aspectRatio > 1.0) {
                newImageSize = NSMakeSize(maxSize / aspectRatio, maxSize);
            } else if (aspectRatio < 1.0) {
                newImageSize = NSMakeSize(maxSize, maxSize * aspectRatio);
            } else {
                newImageSize = NSMakeSize(maxSize, maxSize);
            }
        } else {
            newImageSize = originalImageSize;
        }

        NSImage *resizedImage = [[NSImage alloc] initWithSize:newImageSize];
        [resizedImage lockFocus];
        [self drawInRect:NSMakeRect(0, 0, newImageSize.width, newImageSize.height)
                fromRect:NSMakeRect(0, 0, originalImageSize.width, originalImageSize.height)
               operation:NSCompositeSourceOver
                fraction:1.0];
        [resizedImage unlockFocus];

        NSData *tiffData = [resizedImage TIFFRepresentation];
        [resizedImage release];

        NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithData:tiffData];
        NSDictionary *imagePropDict = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.85]
                                                                  forKey:NSImageCompressionFactor];
        returnImageData = [imageRep representationUsingType:fileType properties:imagePropDict];
        [imageRep release];

        if ([returnImageData length] > size) {
            maxSize = maxSize * 0.99;
            imageIsTooBig--;
        } else {
            imageIsTooBig = 0;
        }
    }
    return returnImageData;
}
For 1.
As another poster mentioned, setSize: only alters the display size of the image, not the actual pixel data of the underlying image file.
To resize, you may want to redraw the source image onto another NSImageRep and then write that to file.
This blog post contains some sample code on how to do the resize in this manner.
How to Resize an NSImage
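A condensed sketch of that redraw (names like sourceImage and newSize are assumptions; the linked post has a fuller version) renders the source image into a context backed by a bitmap rep with the desired pixel dimensions:
NSBitmapImageRep *rep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                            pixelsWide:(NSInteger)newSize.width
                                            pixelsHigh:(NSInteger)newSize.height
                                         bitsPerSample:8
                                       samplesPerPixel:4
                                              hasAlpha:YES
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedRGBColorSpace
                                           bytesPerRow:0
                                          bitsPerPixel:0];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
[sourceImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)
               fromRect:NSZeroRect
              operation:NSCompositeSourceOver
               fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:nil];
[rep release];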
For 2.
I don't think you can without at least creating the object in memory and checking its length. The actual bytes used depend on the image type: the size of a raw ARGB bitmap is easy to predict, but PNG and JPEG are much harder.
[imageData length] should give you the length of the NSData's contents, which I understand to be the final file size when written to disk. That should give you a chance to maximize the size of the file before actually writing it.
As to why the image is not shrinking or growing, according to the docs for setSize:
The size of an image representation combined with the physical dimensions of the image data determine the resolution of the image.
So it may be that by setting the size and not altering the resolution you're not modifying any pixels, just the way in which the pixels should be interpreted.
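A quick way to see this (a sketch; pixelsWide/pixelsHigh report the underlying pixel dimensions):
// setSize: changes only the nominal point size (i.e. the DPI);
// the pixel data is untouched, so the written file stays the same size.
NSLog(@"%ld x %ld pixels", (long)imageRep.pixelsWide, (long)imageRep.pixelsHigh);
[imageRep setSize:NSMakeSize(200, 200)];
NSLog(@"%ld x %ld pixels", (long)imageRep.pixelsWide, (long)imageRep.pixelsHigh); // unchanged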