I'm making an app on macOS Sierra using Metal. Something I am doing is causing the screen to start glitching badly, flashing black in various places, which quickly escalates to the entire screen going black.
In Xcode, if I use the GPU frame capture, the paused frame appears correct, however -- the display suddenly returns from the black abyss. I don't see any errors or warnings in the GPU frame information. However, I am relatively new to Metal and am not experienced with the frame debugger, so I may not know what to look for.
Usually there is nothing printed to the console, but occasionally I do get one of these:
Execution of the command buffer was aborted due to an error during execution. Internal Error (IOAF code 1)
The same app runs on iOS devices without this problem -- so far it only happens on OS X. Does this sound familiar? Any suggestions on what I should check?
I can post some code if it will be helpful, but right now I'm not sure what part of the program is the problem.
EDIT: In response to Noah Witherspoon -- it seems that the problem is caused by some kind of interaction between my scene drawing and the UI drawing. If I display only my scene, which is composed of fat, fuzzy lines, then the problem does not occur. It also does not occur if I display only the UI, which is orthographic projection, a bunch of rounded rectangles and some type. The problem happens only when both are showing. This is a lot of code, many buffers and a lot of commandBuffer usage, too much to put into a post. But here is a little bit.
My lines are rendered with vertex buffers which are arrays of floats, four per vertex:
let dataSize = count * 4 * MemoryLayout<Float>.size
vertexBuffer = device.makeBuffer(bytes: points, length: dataSize, options: MTLResourceOptions())!
These are rendered like this:
renderEncoder.setVertexBuffer(self.vertexBuffer, offset: 0, index: 0)
renderEncoder.setRenderPipelineState(strokeNode.strokePipelineState)
renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: Int(widthCountEdge.count)*2-4)
renderEncoder.setRenderPipelineState(strokeNode.capPipelineState)
renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 12)
Here's my main loop for drawing with the command buffer.
if let commandBuffer = commandQueue.makeCommandBuffer() {
    commandBuffer.addCompletedHandler { (commandBuffer) -> Void in
        self.strokeNode.bufferProvider.availableResourcesSemaphore.signal()
    }
    self.updateDynamicBufferState()
    self.updateGameState(timeInterval)

    let renderPassDescriptor = view.currentRenderPassDescriptor
    renderPassDescriptor?.colorAttachments[0].loadAction = .clear
    renderPassDescriptor?.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
    renderPassDescriptor?.colorAttachments[0].storeAction = .store

    if let renderPassDescriptor = renderPassDescriptor, let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
        strokeNode.subrender(renderEncoder, parentModelViewMatrix: viewMatrix, projectionMatrix: projectionMatrix, renderer: self)
        mainscreen.draw(renderEncoder)
        renderEncoder.endEncoding()
        if let drawable = view.currentDrawable {
            commandBuffer.present(drawable)
        }
    }
    commandBuffer.commit()
}
The line drawing happens in strokeNode.subrender, and then my UI drawing happens in mainscreen.draw. The UI drawing has a lot of components - a lot to list here -- but I will try taking them out one by one and see if I can narrow it down. If none of this looks problematic I'll edit and post some of that ...
Thanks!
I created an app in which I want to display text on top of Google Maps. I chose to use custom markers, but they can only be images, so I decided to create an image from my text using SkiaSharp.
private static ImageSource CreateImageSource(string text)
{
    int numberSize = 20;
    int margin = 5;

    SKBitmap bitmap = new SKBitmap(30, numberSize + margin * 2, SKImageInfo.PlatformColorType, SKAlphaType.Premul);
    SKCanvas canvas = new SKCanvas(bitmap);

    SKPaint paint = new SKPaint
    {
        Style = SKPaintStyle.StrokeAndFill,
        TextSize = numberSize,
        Color = SKColors.Red,
        StrokeWidth = 1,
    };

    canvas.DrawText(text.ToString(), 0, numberSize, paint);

    SKImage skImage = SKImage.FromBitmap(bitmap);
    SKData data = skImage.Encode(SKEncodedImageFormat.Png, 100);
    return ImageSource.FromStream(data.AsStream);
}
The images I create, however, have ugly artifacts at the top of the resulting image, and my feeling is that they get worse if I create multiple images.
I built an example app that shows the artifacts and the code I used to draw the text. It can be found here:
https://github.com/hot33331/SkiaSharpExample
How can I get rid of those artifacts? Am I using Skia wrong?
I got the following answer from Matthew Leibowitz on the SkiaSharp GitHub:
The chances are you are not clearing the canvas/bitmap first.
You can either do bitmap.Erase(SKColors.Transparent) or canvas.Clear(SKColors.Transparent) (you can use any color).
The reason for this is performance. When creating a new bitmap, the computer has no way of knowing what background color you want. So, if it was to go transparent and you wanted white, then there would be two draw operations to clear the pixels (and this may be very expensive for large images).
During the allocation of the bitmap, the memory is provided, but the actual data is untouched. If there was anything there previously (which there will be), this data appears as colored pixels.
When I've seen that before, it's been because the memory passed to SkiaSharp was not zeroed. As an optimization, though, Skia assumes that the memory block passed to it is pre-zeroed. As a result, if your first operation is a clear, it will ignore that operation because it thinks the state is already clean. To resolve this issue, you can manually zero the memory passed to SkiaSharp.
public static SKSurface CreateSurface(int width, int height)
{
    // create a block of unmanaged native memory for use as the Skia bitmap buffer.
    // unfortunately, this may not be zeroed in some circumstances.
    IntPtr buff = System.Runtime.InteropServices.Marshal.AllocCoTaskMem(width * height * 4);

    byte[] empty = new byte[width * height * 4];

    // copy in zeroed memory.
    // maybe there's a more sanctioned way to do this.
    System.Runtime.InteropServices.Marshal.Copy(empty, 0, buff, width * height * 4);

    // create the actual SkiaSharp surface backed by the zeroed buffer.
    var info = new SKImageInfo(width, height, SKColorType.Rgba8888, SKAlphaType.Premul);
    var surface = SKSurface.Create(info, buff, width * 4);

    return surface;
}
Edit: by the way, I assume this is a bug in SkiaSharp. The samples/APIs that create the buffer for you should probably be zeroing it out. Depending on the platform it can be hard to reproduce, as the memory allocation behaves differently and is more or less likely to hand you untouched memory.
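For the common case, the simple fix described above is just a clear before drawing. Below is a sketch of the question's CreateImageSource with that one line added (assuming the same SkiaSharp APIs the question already uses):

private static ImageSource CreateImageSource(string text)
{
    int numberSize = 20;
    int margin = 5;

    SKBitmap bitmap = new SKBitmap(30, numberSize + margin * 2, SKImageInfo.PlatformColorType, SKAlphaType.Premul);
    SKCanvas canvas = new SKCanvas(bitmap);

    // clear the uninitialized pixel memory before drawing; any color works
    canvas.Clear(SKColors.Transparent);

    SKPaint paint = new SKPaint
    {
        Style = SKPaintStyle.StrokeAndFill,
        TextSize = numberSize,
        Color = SKColors.Red,
        StrokeWidth = 1,
    };

    canvas.DrawText(text, 0, numberSize, paint);

    SKImage skImage = SKImage.FromBitmap(bitmap);
    SKData data = skImage.Encode(SKEncodedImageFormat.Png, 100);
    return ImageSource.FromStream(data.AsStream);
}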
I have an extremely basic polygon that is the texture for a sprite in my game, yet when I try and create a physicsBody from this texture for the sprite I get this error:
2016-06-19 08:25:21.707 Space Escape[14677:5651144] PhysicsBody: Could not create physics body.
Also, the game uses many different simple polygons; for some of them the physics body can be created, yet for others it fails with this error.
func setPhysics(size: CGSize) {
    self.physicsBody = SKPhysicsBody(texture: asteroidTexture, size: size)
    self.physicsBody?.angularDamping = 0
    self.physicsBody?.angularVelocity = 2
}
Here is the texture:
It works in my playground. Try replacing the size parameter as in the code below and let me know:
let asteroidTexture = SKTexture(imageNamed: "sprite")
let physicsBody = SKPhysicsBody(texture: asteroidTexture, size: asteroidTexture.size())
I've found that this kind of physics representation can slow down your collision handling. Instead, if you don't need extreme precision for your sprites' physics bodies, try:
self.physicsBody = SKPhysicsBody(circleOfRadius: size.width/2)
It is much lighter on the CPU.
You'll see a big difference once your game is nearly complete (say, 80% done). I hope this helps.
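Applied to the setPhysics(size:) function from the question, the circle-based version might look like this (a sketch that simply combines the answer's one-liner with the question's property settings):

func setPhysics(size: CGSize) {
    // a circle roughly matching the sprite is far cheaper than a per-pixel texture body
    self.physicsBody = SKPhysicsBody(circleOfRadius: size.width / 2)
    self.physicsBody?.angularDamping = 0
    self.physicsBody?.angularVelocity = 2
}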
According to the docs, CTFramesetterSuggestFrameSizeWithConstraints() "determines the frame size needed for a string range".
Unfortunately the size returned by this function is never accurate. Here is what I am doing:
NSAttributedString *string = [[[NSAttributedString alloc] initWithString:@"lorem ipsum" attributes:nil] autorelease];
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef) string);
CGSize textSize = CTFramesetterSuggestFrameSizeWithConstraints(framesetter, CFRangeMake(0,0), NULL, CGSizeMake(rect.size.width, CGFLOAT_MAX), NULL);
The returned size always has the correct width calculated, however the height is always slightly shorter than what is expected.
Is this the correct way to use this method?
Is there any other way to layout Core Text?
Seems I am not the only one to run into problems with this method. See https://devforums.apple.com/message/181450.
Edit:
I measured the same string with Quartz using sizeWithFont:, supplying the same font to both the attributed string, and to Quartz. Here are the measurements I received:
Core Text: 133.569336 x 16.592285
Quartz: 135.000000 x 31.000000
Try this; it seems to work:
+ (CGFloat)heightForAttributedString:(NSAttributedString *)attrString forWidth:(CGFloat)inWidth
{
    CGFloat H = 0;

    // Create the framesetter with the attributed string.
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);

    CGRect box = CGRectMake(0, 0, inWidth, CGFLOAT_MAX);

    CFIndex startIndex = 0;

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddRect(path, NULL, box);

    // Create a frame for this column and draw it.
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(startIndex, 0), path, NULL);

    // Start the next frame at the first character not visible in this frame.
    //CFRange frameRange = CTFrameGetVisibleStringRange(frame);
    //startIndex += frameRange.length;

    CFArrayRef lineArray = CTFrameGetLines(frame);
    CFIndex j = 0, lineCount = CFArrayGetCount(lineArray);
    CGFloat h, ascent, descent, leading;

    for (j = 0; j < lineCount; j++)
    {
        CTLineRef currentLine = (CTLineRef)CFArrayGetValueAtIndex(lineArray, j);
        CTLineGetTypographicBounds(currentLine, &ascent, &descent, &leading);
        h = ascent + descent + leading;
        NSLog(@"%f", h);
        H += h;
    }

    CFRelease(frame);
    CFRelease(path);
    CFRelease(framesetter);

    return H;
}
For a single line frame, try this:
CTLineRef line = CTLineCreateWithAttributedString((CFAttributedStringRef)string);
CGFloat ascent;
CGFloat descent;
CGFloat width = CTLineGetTypographicBounds(line, &ascent, &descent, NULL);
CGFloat height = ascent+descent;
CGSize textSize = CGSizeMake(width,height);
For multiline frames, you also need to add the line's leading (see the sample code in the Core Text Programming Guide).
For some reason, CTFramesetterSuggestFrameSizeWithConstraints() is using the difference in ascent and descent to calculate the height:
CGFloat wrongHeight = ascent-descent;
CGSize textSize = CGSizeMake(width, wrongHeight);
It could be a bug?
I'm having some other problems with the width of the frame; it's worth checking out as it only shows up in special cases. See this question for more.
The problem is that you have to apply a paragraph style to the text before you measure it. If you don't then you get the default leading of 0.0. I provided a code sample for how to do this in my answer to a duplicate of this question here https://stackoverflow.com/a/10019378/1313863.
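A minimal Swift sketch of that idea (the specifier and spacing value here are illustrative, not taken from the linked answer): attach an explicit CTParagraphStyle to the string before measuring, so the framesetter is not working with the default leading of 0.0.

import Foundation
import CoreGraphics
import CoreText

func measuredHeight(for text: String, width: CGFloat) -> CGFloat {
    // build a paragraph style with a nonzero line spacing adjustment (value assumed)
    var lineSpacing: CGFloat = 4
    let paragraphStyle: CTParagraphStyle = withUnsafeBytes(of: &lineSpacing) { spacingBytes in
        var setting = CTParagraphStyleSetting(spec: .lineSpacingAdjustment,
                                              valueSize: MemoryLayout<CGFloat>.size,
                                              value: spacingBytes.baseAddress!)
        return CTParagraphStyleCreate(&setting, 1)
    }

    // apply it to the whole string before measuring
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(NSAttributedString.Key(rawValue: kCTParagraphStyleAttributeName as String),
                            value: paragraphStyle,
                            range: NSRange(location: 0, length: attributed.length))

    let framesetter = CTFramesetterCreateWithAttributedString(attributed as CFAttributedString)
    let size = CTFramesetterSuggestFrameSizeWithConstraints(
        framesetter, CFRange(location: 0, length: 0), nil,
        CGSize(width: width, height: .greatestFiniteMagnitude), nil)
    return size.height
}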
ing.conti's answer but in Swift 4:
var H: CGFloat = 0
// Create the framesetter with the attributed string.
let framesetter = CTFramesetterCreateWithAttributedString(attributedString as CFAttributedString)

let box: CGRect = CGRect(x: 0, y: 0, width: width, height: CGFloat.greatestFiniteMagnitude)
let startIndex: CFIndex = 0

let path: CGMutablePath = CGMutablePath()
path.addRect(box)

// Create a frame for this column and draw it.
let frame: CTFrame = CTFramesetterCreateFrame(framesetter, CFRangeMake(startIndex, 0), path, nil)

// Start the next frame at the first character not visible in this frame.
//CFRange frameRange = CTFrameGetVisibleStringRange(frame);
//startIndex += frameRange.length;

let lineArray: CFArray = CTFrameGetLines(frame)
let lineCount: CFIndex = CFArrayGetCount(lineArray)

var h: CGFloat = 0
var ascent: CGFloat = 0
var descent: CGFloat = 0
var leading: CGFloat = 0

for j in 0..<lineCount {
    let currentLine = unsafeBitCast(CFArrayGetValueAtIndex(lineArray, j), to: CTLine.self)
    CTLineGetTypographicBounds(currentLine, &ascent, &descent, &leading)
    h = ascent + descent + leading
    H += h
}

return H
I did try to keep it 1:1 with the Objective-C code, but Swift is not as nice about handling pointers, so some changes were required for the casts.
I also did some benchmarks comparing this code (and its Objective-C counterpart) to other height methods. As a heads up, I used a huge and very complex attributed string as input and ran everything on the simulator, so the absolute times are meaningless, but the relative speeds are valid.
Runtime for 1000 iterations (ms) BoundsForRect: 8909.763097763062
Runtime for 1000 iterations (ms) layoutManager: 7727.7010679244995
Runtime for 1000 iterations (ms) CTFramesetterSuggestFrameSizeWithConstraints: 1968.9229726791382
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame ObjC: 1941.6030206680298
Runtime for 1000 iterations (ms) CTFramesetterCreateFrame-Swift: 1912.694974899292
It might seem strange, but I found that if you use the ceil function first and then add 1 to the height, it always works. Many third-party APIs use this trick.
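In code, the workaround reads as follows (measuredHeight standing in for whatever one of the measuring calls above returned):

let displayHeight = ceil(measuredHeight) + 1  // round up, then pad by a point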
Resurrecting.
When initially determining where lines should be placed within a frame, Core Text seems to massage the ascent+descent for the purposes of line origin calculation. In particular, it seems like 0.2*(ascent+descent) is added to the ascent, and then both the descent and resultant ascent are modified by floor(x + 0.5), and then the baseline positions are calculated from these adjusted ascents and descents. Both of these steps are affected by certain conditions whose nature I am not sure of, and I have already forgotten at which point paragraph styles are taken into account, despite only looking into it a few days ago.
I've already resigned myself to just considering a line to start at its baseline and not trying to figure out where the actual lines land. Unfortunately, this still does not seem to be enough: paragraph styles are not reflected in CTLineGetTypographicBounds(), and some fonts like Klee that have nonzero leading wind up crossing the path rect! Not sure what to do about this... probably for another question.
UPDATE
It seems CTLineGetBoundsWithOptions(line, 0) does get the proper line bounds, but not quite fully: there's a gap between lines, and with some fonts (Klee again) the gap is negative and the lines overlap... Not sure what to do about this. :| At least we're slightly closer??
And even then it still does not take paragraph styles into consideration >:|
CTLineGetBoundsWithOptions() is not listed on Apple's documentation site, possibly due to a bug in the current version of their documentation generator. It is a fully documented API, however — you'll find it in the header files and it was discussed at length at WWDC 2012 session 226.
None of the options are relevant to us: they reduce the bounds rect by taking certain font design choices into consideration (or increase the bounds rect randomly, in the case of the new kCTLineBoundsIncludeLanguageExtents). One useful option in general, though, is kCTLineBoundsUseGlyphPathBounds, which is equivalent to CTLineGetImageBounds() but without needing to specify a CGContext (and thus without being subject to an existing text matrix or CTM).
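A small Swift sketch of measuring with CTLineGetBoundsWithOptions instead of CTLineGetTypographicBounds (frame is assumed to be a CTFrame built as in the framesetter answers above; the empty option set is the plain bounds discussed here):

import CoreGraphics
import CoreText

func height(of frame: CTFrame) -> CGFloat {
    // sum the per-line bounds; options such as .useGlyphPathBounds can be passed to tighten them
    let lines = CTFrameGetLines(frame) as! [CTLine]
    return lines.reduce(0) { total, line in
        total + CTLineGetBoundsWithOptions(line, []).height
    }
}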
After weeks of trying everything, in every combination possible, I made a breakthrough and found something that works. This issue seems to be more prominent on macOS than on iOS, but it appears on both.
What worked for me was to use a CATextLayer instead of an NSTextField (on macOS) or a UILabel (on iOS).
And using boundingRect(with:options:context:) instead of CTFramesetterSuggestFrameSizeWithConstraints. Even though in theory the latter is lower level than the former, and I assumed it would be more precise, the game changer turns out to be NSString.DrawingOptions.usesDeviceMetrics.
The frame size suggested fits like a charm.
Example:
let attributedString = NSAttributedString(string: "my string")
let maxWidth = CGFloat(300)
let size = attributedString.boundingRect(
with: .init(width: maxWidth,
height: .greatestFiniteMagnitude),
options: [
.usesFontLeading,
.usesLineFragmentOrigin,
.usesDeviceMetrics])
let textLayer = CATextLayer()
textLayer.frame = .init(origin: .zero, size: size)
textLayer.contentsScale = 2 // for retina
textLayer.isWrapped = true // for multiple lines
textLayer.string = attributedString
Then you can add the CATextLayer to any NSView/UIView.
macOS
let view = NSView()
view.wantsLayer = true
view.layer?.addSublayer(textLayer)
iOS
let view = UIView()
view.layer.addSublayer(textLayer)
I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried:
Calling - (NSSize) size, but the results are useless (for this purpose), as they'll give whatever width the string desires.
Calling - (void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread).
Creating a temporary NSTextView and doing:
[[tmpView textStorage] setAttributedString:aString];
[tmpView setHorizontallyResizable:NO];
[tmpView sizeToFit];
When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short)).
I'd appreciate any pointers, if anyone else has managed this.
-[NSAttributedString boundingRectWithSize:options:]
You can specify NSStringDrawingUsesDeviceMetrics to get the union of all glyph bounds.
Unlike -[NSAttributedString size], the returned NSRect represents the dimensions of the area that would change if the string is drawn.
As @Bryan comments, boundingRectWithSize:options: is deprecated (not recommended) in OS X 10.11 and later. This is because string styling is now dynamic depending on the context.
For OS X 10.11 and later, see Apple's Calculating Text Height developer documentation.
The answer is to use
- (void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options
but the rect you pass in should have 0.0 in the dimension you want to be unlimited (which, er, makes perfect sense). Example here.
I have a complex attributed string with multiple fonts and got incorrect results with a few of the above answers that I tried first. Using a UITextView gave me the correct height, but was too slow for my use case (sizing collection cells). I wrote Swift code using the same general approach described in the Apple doc referenced previously and described by Erik. This gave me correct results with much faster execution than having a UITextView do the calculation.
private func heightForString(_ str: NSAttributedString, width: CGFloat) -> CGFloat {
    let ts = NSTextStorage(attributedString: str)
    let size = CGSize(width: width, height: CGFloat.greatestFiniteMagnitude)
    let tc = NSTextContainer(size: size)
    tc.lineFragmentPadding = 0.0
    let lm = NSLayoutManager()
    lm.addTextContainer(tc)
    ts.addLayoutManager(lm)
    lm.glyphRange(forBoundingRect: CGRect(origin: .zero, size: size), in: tc)
    let rect = lm.usedRect(for: tc)
    return rect.integral.size.height
}
You might be interested in Jerry Krinock's great (OS X only) NS(Attributed)String+Geometrics category, which is designed to do all sorts of string measurement, including what you're looking for.
On OS X 10.11+, the following method works for me (from Apple's Calculating Text Height document)
- (CGFloat)heightForString:(NSAttributedString *)myString atWidth:(float)myWidth
{
    NSTextStorage *textStorage = [[NSTextStorage alloc] initWithAttributedString:myString];
    NSTextContainer *textContainer = [[NSTextContainer alloc] initWithContainerSize:NSMakeSize(myWidth, FLT_MAX)];
    NSLayoutManager *layoutManager = [[NSLayoutManager alloc] init];

    [layoutManager addTextContainer:textContainer];
    [textStorage addLayoutManager:layoutManager];
    [layoutManager glyphRangeForTextContainer:textContainer];

    return [layoutManager usedRectForTextContainer:textContainer].size.height;
}
Swift 4.2
let attributedString = self.textView.attributedText
let rect = attributedString?.boundingRect(with: CGSize(width: self.textView.frame.width, height: CGFloat.greatestFiniteMagnitude), options: [.usesLineFragmentOrigin, .usesFontLeading], context: nil)
print("attributedString Height = ",rect?.height)
Swift 3:
let attributedStringToMeasure = NSAttributedString(string: textView.text, attributes: [
NSFontAttributeName: UIFont(name: "GothamPro-Light", size: 15)!,
NSForegroundColorAttributeName: ClickUpConstants.defaultBlackColor
])
let placeholderTextView = UITextView(frame: CGRect(x: 0, y: 0, width: widthOfActualTextView, height: 10))
placeholderTextView.attributedText = attributedStringToMeasure
let size: CGSize = placeholderTextView.sizeThatFits(CGSize(width: widthOfActualTextView, height: CGFloat.greatestFiniteMagnitude))
height = size.height
This answer works great for me, unlike the other ones which were giving me incorrect heights for larger strings.
If you want to do this with regular text instead of attributed text, do the following:
let placeholderTextView = UITextView(frame: CGRect(x: 0, y: 0, width: ClickUpConstants.screenWidth - 30.0, height: 10))
placeholderTextView.text = "Some text"
let size: CGSize = placeholderTextView.sizeThatFits(CGSize(width: widthOfActualTextView, height: CGFloat.greatestFiniteMagnitude))
height = size.height
I just wasted a bunch of time on this, so I'm providing an additional answer to save others in the future. Graham's answer is 90% correct, but it's missing one key piece:
To obtain accurate results with -boundingRectWithSize:options: you MUST pass the following options:
NSStringDrawingUsesLineFragmentOrigin|NSStringDrawingUsesDeviceMetrics|NSStringDrawingUsesFontLeading
If you omit the lineFragmentOrigin one, you'll get nonsense back; the returned rect will be a single line high and won't at all respect the size you pass into the method.
Why this is so complicated and so poorly documented is beyond me. But there you have it. Pass those options and it'll work perfectly (on OS X at least).
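For reference, a minimal sketch of the call with all three options (names here are assumed; attributedString is whatever NSAttributedString you are measuring, and 300 is an arbitrary wrap width):

let bounds = attributedString.boundingRect(
    with: NSSize(width: 300, height: CGFloat.greatestFiniteMagnitude),
    options: [.usesLineFragmentOrigin, .usesDeviceMetrics, .usesFontLeading])
let height = ceil(bounds.height)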
Use NSAttributedString method
- (CGRect)boundingRectWithSize:(CGSize)size options:(NSStringDrawingOptions)options context:(NSStringDrawingContext *)context
The size is the constraint on the measured area: the calculated width is restricted to the specified width, while the height is flexible based on that width. You can pass nil for the context if none is available. To get the size of multi-line text, include NSStringDrawingUsesLineFragmentOrigin in the options.
As many have mentioned above, and based on my own testing:
I use open func boundingRect(with size: CGSize, options: NSStringDrawingOptions = [], context: NSStringDrawingContext?) -> CGRect on iOS, like below:
let rect = attributedTitle.boundingRect(with: CGSize(width:200, height:0), options: NSStringDrawingOptions.usesLineFragmentOrigin, context: nil)
Here 200 is the fixed width you expect; for the height I pass 0, since I think it's better to effectively tell the API that the height is unlimited.
The options are not so important here; I have tried .usesLineFragmentOrigin, .usesLineFragmentOrigin.union(.usesFontLeading), and .usesLineFragmentOrigin.union(.usesFontLeading).union(.usesDeviceMetrics), and they all give the same result.
And the result is what I expected.
Thanks.
Not a single answer on this page worked for me, nor did that ancient Objective-C code from Apple's documentation. What I finally got to work for a UITextView was first setting its text or attributedText property and then calculating the needed size like this:
let size = textView.sizeThatFits(CGSize(width: maxWidth, height: CGFloat.greatestFiniteMagnitude))
Works perfectly. Booyah!
I found a helper class that computes the height and width of an attributed string (tested code):
https://gist.github.com/azimin/aa1a79aefa1cec031152fa63401d2292
Add the above file to your project.
How to use:
let attribString = AZTextFrameAttributes(attributedString: lbl.attributedText!)
let width : CGFloat = attribString.calculatedTextWidth()
print("width is :: >> \(width)")