How to draw text in OpenGL on Mac OS with Retina display - macos

I'm using OpenGL to draw in Mac OS. When my code runs on Retina display everything works fine except text drawing.
Under a Retina display the text is twice as big as it should be. This happens because the font size is in points and each point is 2 pixels under Retina, while OpenGL is pixel based.
Here is the correct text drawing under standard display:
Here is the incorrect text drawing under Retina display:
Here is how I normally draw strings. Since OpenGL does not have text drawing functions, I do the following to draw text:
Get the font:
NSFontManager *fontManager = [NSFontManager sharedFontManager];
NSString *font_name = [NSString stringWithCString: "Helvetica" encoding: NSMacOSRomanStringEncoding];
font = [fontManager fontWithFamily: font_name traits:fontStyle weight:5 size:9];
m_attribs = [[NSMutableDictionary dictionaryWithCapacity: 3] retain];
[m_attribs setObject:font forKey:NSFontAttributeName];
Create and measure the string:
NSString* aString = [NSString stringWithCString: "blah blah" encoding: NSMacOSRomanStringEncoding];
NSSize frameSize = [aString sizeWithAttributes: m_attribs];
Allocate NSImage with the size:
NSImage* image = [[NSImage alloc] initWithSize:frameSize];
[image lockFocus];
Draw the string into the image:
[aString drawAtPoint:NSMakePoint (0, 0) withAttributes:m_attribs];
Get the bits:
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect (0.0f, 0.0f, frameSize.width, frameSize.height)];
[image unlockFocus];
Create OpenGL texture:
GLuint texture = 0;
glGenTextures(1, &texture);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, GLsizei(frameSize.width), GLsizei(frameSize.height), 0, GL_RGBA, GL_UNSIGNED_BYTE, [bitmap bitmapData]);
Draw the texture:
glBindTexture ….
other OpenGL drawing code
My question is how to get NSString to draw at pixel resolution rather than in points.
I tried the following:
Draw at half the point size: 4.5 instead of 9. This gives me the correct size but the text is drawn blurry.
Draw at the point size and shrink the texture to half size in OpenGL; again, this does not give good-looking results.

OpenGL's coordinate system is in fact point based, not pixel based, but you are the one who decides what those points are. A context defines its 2D coordinate system via the glOrtho function (or you could construct an orthographic matrix by hand), which sets up the minimum and maximum x,y coordinates on the screen. For example, an orthographic projection could be set up so that 0 is on the left of the screen and 100 is on the right, regardless of the size of the framebuffer you are rendering into.
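For instance, here is a minimal sketch (an assumption on my part, not code from the question: it presumes a legacy fixed-function context inside an NSOpenGLView subclass, so convertRectToBacking: is available) of setting up an orthographic projection in pixel units:
// Sketch only: map one GL unit to one pixel on both standard and Retina screens.
NSRect backingBounds = [self convertRectToBacking:[self bounds]];
glViewport(0, 0, (GLsizei)backingBounds.size.width, (GLsizei)backingBounds.size.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, backingBounds.size.width, 0, backingBounds.size.height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();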
The font texture appears to be created just fine. The problem is that you are rendering it on geometry twice as large as it needs to be. In OpenGL, texture size does not affect the size of the object rendered on your screen. The size on screen is defined by the geometry passed to glDrawArrays or glBegin etc., not by the texture.
I think the problem is that you are using the pixel size of the font texture to define the quad size used to render on screen. This would put your problem in the "other OpenGL drawing code" section. To fix it you could apply a scale factor to the drawing. In Retina mode the scale factor would be 0.5, and for normal screens it would be 1.0. (UIKit uses a similar idea to render UIView content.)
The quad calculation could look something like this:
quad.width = texture.width * scaleFactor
quad.height = texture.height * scaleFactor
Another option would be to separate the quad rendering size completely from the texture. If you had a function or a class for drawing text it could have a font size parameter which it would use as the actual quad size instead of using the texture size.
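As a rough sketch of the scale-factor idea (immediate-mode GL is assumed; x, y, texWidthPx and texHeightPx are illustrative names, not variables from the question):
// Sketch only: texWidthPx/texHeightPx are the texture's pixel dimensions.
// On a Retina (2x) display the quad is drawn at half the texture's pixel size,
// so the text appears at its intended point size.
CGFloat scaleFactor = 1.0 / [[self window] backingScaleFactor]; // 1.0 normal, 0.5 Retina
CGFloat quadW = texWidthPx * scaleFactor;
CGFloat quadH = texHeightPx * scaleFactor;
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x, y);
glTexCoord2f(texWidthPx, 0); glVertex2f(x + quadW, y);
glTexCoord2f(texWidthPx, texHeightPx); glVertex2f(x + quadW, y + quadH);
glTexCoord2f(0, texHeightPx); glVertex2f(x, y + quadH);
glEnd();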

For my Retina display, I had the problem of my framebuffer not filling the actual window (the full buffer was rendered into a quarter of the window). In that case, doubling the viewport solves the problem.
// Instead of the actual window size width*height,
// double the dimensions for a Retina display
glViewport(0, 0, width*2, height*2);
In your case (this part is only an assumption, since I cannot run your code), changing the frame sizes that you pass to GL for texture creation may do the trick:
GLsizei(frameSize.width*2), GLsizei(frameSize.height*2)
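If you would rather not hard-code the factor of two, here is a small sketch (assuming the GL view belongs to a window) that derives it from the backing store:
// Sketch only: scale is 1.0 on standard displays and 2.0 on Retina displays.
CGFloat scale = [[self window] backingScaleFactor];
glViewport(0, 0, (GLsizei)(width * scale), (GLsizei)(height * scale));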

Given that you have an NSBitmapImageRep and get pixel raster data from that, you should be using bitmap.pixelsWide and bitmap.pixelsHigh, not frameSize when creating the texture from the bitmap.
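For example, a sketch of the texture upload using the bitmap's pixel dimensions (the GL_UNSIGNED_BYTE type argument is assumed, matching the question's call):
// Sketch only: on a Retina display pixelsWide/pixelsHigh are twice the point-based frameSize.
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA,
(GLsizei)[bitmap pixelsWide], (GLsizei)[bitmap pixelsHigh],
0, GL_RGBA, GL_UNSIGNED_BYTE, [bitmap bitmapData]);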

Related

Cocoa NSPoint to Quartz NSPoint - Flip Y coordinate

In macOS programming, we know that:
Quartz uses a coordinate space where the origin (0, 0) is at the top-left of the primary display. Increasing y goes down.
Cocoa uses a coordinate space where the origin (0, 0) is the bottom-left of the primary display and increasing y goes up.
Now I am using a Quartz API, CGImageCreateWithImageInRect, to crop an image; it takes a rectangle as a parameter. The rect's Y origin comes from Cocoa's mouseDown events.
Thus I get crops at inverted locations...
I tried this code to flip my Y coordinate in my cropRect:
//Get the point in MouseDragged event
NSPoint currentPoint = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
CGRect nsRect = CGRectMake(currentPoint.x, currentPoint.y,
circleSizeW, circleSizeH);
//Now Flip the Y please!
CGFloat flippedY = self.imageView.frame.size.height - NSMaxY(nsRect);
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
But for areas near the top, I get wrong flippedY coordinates.
If i click near top edge of the view, i get flippedY = 510 to 515
At the top edge it should be between 0 to 10 :-|
Can someone point me to the correct and reliable way to flip the Y coordinate in such circumstances? Thank you!
Here is a sample project on GitHub highlighting the issue:
https://github.com/kamleshgk/SampleMacOSApp
As Charles mentioned, the Core Graphics API you are using requires coordinates relative to the image (not the screen). The important thing is to convert the event location from window coordinates to the view which most closely corresponds to the image's location and then flip it relative to that same view's bounds (not frame). So:
NSView *relevantView = /* only you know which view */;
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// currentPoint is in Cocoa's y-up coordinate system, relative to relevantView, which hopefully corresponds to your image's location
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// currentPoint is now flipped to be in Quartz's y-down coordinate system, still relative to relevantView/your image
The rect you pass to CGImageCreateWithImageInRect should be in coordinates relative to the input image's size, not screen coordinates. Assuming the size of the input image matches the size of the view to which you've converted your point, you should be able to achieve this by subtracting the rect's corner from the image's height, rather than the screen height.
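Putting that together, here is a sketch of building the crop rect (sourceCGImage is a hypothetical CGImage matching relevantView's size; circleSizeW/circleSizeH come from the question):
// Sketch only: flip the point into Quartz's y-down space relative to the view, then crop.
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
CGFloat flippedY = NSMaxY(relevantView.bounds) - currentPoint.y;
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
CGImageRef croppedImage = CGImageCreateWithImageInRect(sourceCGImage, cropRect);
// ... use croppedImage, then CGImageRelease(croppedImage);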

How can CALayer image edges be prevented from stretching during resize?

I am setting the .contents of a CALayer to a CGImage, derived from drawing into an NSBitmapImageRep.
As far as I understand from the docs and WWDC videos, setting the layer's .contentsCenter to an NSRect like {{0.5, 0.5}, {0, 0}}, in combination with a .contentsGravity of kCAGravityResize should lead to Core Animation resizing the layer by stretching the middle pixel, the top and bottom horizontally, and the sides vertically.
This very nearly works, but not quite. The layer resizes more-or-less correctly, but if I draw lines at the edge of the bitmap, as I resize the window the lines can be seen to fluctuate in thickness very slightly. It's subtle enough to be barely a problem until the resizing gets down to around 1/4 of the original layer's size, below which point the lines can thin and disappear altogether. If I draw the bitmaps multiple times at different sizes, small differences in line thickness are very apparent.
I originally suspected a pixel-alignment issue, but it can't be that, because the thickness of the stationary left-hand edge (for example) fluctuates as I resize the right-hand edge. It happens on 1x and 2x screens.
Here's some test code. It's the updateLayer method from a layer-backed NSView subclass (I'm using the alternative non-DrawRect draw path):
- (void)updateLayer {
id image = [self imageForCurrentScaleFactor]; // CGImage
self.layer.contents = image;
// self.backingScaleFactor is set from the window's backingScaleFactor
self.layer.contentsScale = self.backingScaleFactor;
self.layer.contentsCenter = NSMakeRect(0.5, 0.5, 0, 0);
self.layer.contentsGravity = kCAGravityResize;
}
And here's some test drawing code (creating the image supplied by imageForCurrentScaleFactor above):
CGFloat width = rect.size.width;
CGFloat height = rect.size.height;
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
pixelsWide: width * scaleFactor
pixelsHigh: height * scaleFactor
bitsPerSample: 8
samplesPerPixel: 4
hasAlpha: YES
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
bytesPerRow: 0
bitsPerPixel: 0];
[imageRep setSize:rect.size];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:imageRep];
[NSGraphicsContext setCurrentContext:ctx];
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:rect];
[[NSColor blackColor] setStroke];
[NSBezierPath setDefaultLineWidth:1.0f];
[NSBezierPath strokeRect:insetRect];
[NSGraphicsContext restoreGraphicsState];
// image for CALayer.contents is now [imageRep CGImage]
The solution (if you're talking about the problem I think you're talking about) is to have a margin of transparent pixels forming the outside edges of the image. One pixel thick, all the way around, will do it. The reason is that the problem (if it's the problem I think it is) arises only with visible pixels that touch the outside edge of the image. Therefore the idea is to have no visible pixels touch the outside edge of the image.
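Here is a sketch of that idea against the question's drawing code (marginRect is an illustrative name): inset the fill and stroke by one pixel so the bitmap's outermost row and column stay transparent.
// Sketch only: keep a 1-pixel transparent border so no visible pixel touches the bitmap edge.
NSRect marginRect = NSInsetRect(rect, 1.0, 1.0);
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:marginRect]; // fill stops 1 px short of the edge
[[NSColor blackColor] setStroke];
[NSBezierPath strokeRect:NSInsetRect(marginRect, 0.5, 0.5)]; // keep the 1 px stroke inside the margin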
I have found a practical answer, but would be interested in comments filling in detail from anyone who knows how this works.
The problem did prove to be to do with how the CALayer was being stretched. I was drawing into a bitmap of arbitrary size, on the basis that (as the CALayer docs suggest) use of a .contentsCenter with zero width and height would in effect do a nine-part-image stretch, selecting the single centre pixel as the central stretching portion. With this bitmap as a layer's .contents, I could then resize the CALayer to any desired size (down or up).
It turns out that the 'arbitrary size' was the problem. Something odd happens in the way CALayer stretches the edge portions (at least when resizing down). By instead making the initial frame for drawing tiny (i.e. just big enough to fit my outline drawing plus a couple of pixels for the central stretching portion), nothing spurious makes its way into the edges during stretching.
The bitmap stretches properly if created with rect just big enough to fit the contents and stretchable center pixel, ie.:
NSRect rect = NSMakeRect(0, 0, lineWidth * 2 + 2, lineWidth * 2 + 2);
This tiny image stretches to any larger size perfectly.

Making a CoreImage filter that sums all row pixels, similar to CIRowAverage?

I'm currently running code on the CPU that sums the columns and rows of a greyscale NSImage (i.e. only one sample per pixel). I thought I would try to move the code to the GPU (if possible). I found the Core Image filters CIRowAverage and CIColumnAverage, which seem similar.
In the Apple Docs on writing custom Core Image filters they state,
Keep in mind that your code can’t accumulate knowledge from pixel to pixel. A good strategy when writing your code is to move as much invariant calculation as possible from the actual kernel and place it in the Objective-C portion of the filter.
This hints that maybe one cannot make a summation of pixels using a filter kernel. If so, how do the above filters manage to get an average of a region?
So my question is: what is the best way to implement summing the rows or columns of an image to get the total value of the pixels? Should I stick to the CPU?
The Core Image filters perform this averaging through a series of reductions. A former engineer on the team describes how this was done for the CIAreaAverage filter within this GPU Gems chapter (under section 26.2.2 "Finding the Centroid").
I talk about a similar averaging by reduction in my answer here. I needed this capability on iOS, so I wrote a fragment shader that reduced the image by a factor of four in both horizontal and vertical dimensions, sampling between pixels in order to average sixteen pixels into one at each step. Once the image was reduced to a small enough size, the remaining pixels were read out and averaged to produce a single final value.
This kind of reduction is still very fast to perform on the GPU, and I was able to extract an average color from a 640x480 video frame in ~6 ms on an iPhone 4. You'll of course have a lot more horsepower to play with on a Mac.
You could take a similar approach to this by reducing in only one direction or the other at each step. If you are interested in obtaining a sum of the pixel values, you'll need to watch out for precision limits in the pixel formats used on the GPU. By default, RGBA color values are stored as 8-bit values, but OpenGL (ES) extensions on certain GPUs can give you the ability to render into 16-bit or even 32-bit floating point textures, which extends your dynamic range. I'm not sure, but I believe that Core Image lets you use 32-bit float components on the Mac.
FYI on the CIAreaAverage filter—it's coded like this:
CGRect inputExtent = [self.inputImage extent];
CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
                                       Y:inputExtent.origin.y
                                       Z:inputExtent.size.width
                                       W:inputExtent.size.height];
CIImage *inputAverage = [CIFilter filterWithName:@"CIAreaAverage" keysAndValues:@"inputImage", self.inputImage, @"inputExtent", extent, nil].outputImage;
//CIImage *inputAverage = [self.inputImage imageByApplyingFilter:@"CIAreaMinimum" withInputParameters:@{@"inputImage" : inputImage, @"inputExtent" : extent}];
EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];
size_t rowBytes = 32; // ARGB has 4 components
uint8_t byteBuffer[rowBytes]; // Buffer to render into
[myContext render:inputAverage toBitmap:byteBuffer rowBytes:rowBytes bounds:[inputAverage extent] format:kCIFormatRGBA8 colorSpace:nil];
const uint8_t *pixel = &byteBuffer[0];
float red = pixel[0] / 255.0;
float green = pixel[1] / 255.0;
float blue = pixel[2] / 255.0;
NSLog(@"%f, %f, %f\n", red, green, blue);
Your output should look something like this:
2015-05-23 15:58:20.935 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196
2015-05-23 15:58:20.981 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196

PNG to NSImage and back causes jaggies near transparency

I'm resizing some PNG files from within a Cocoa app. The files are eventually loaded as OpenGL textures by another app, and a poorly-written shader is applied, which at one point, does the following:
texColor = mix(constant,vec4(texColor.rgb/texColor.a,texColor.a),texColor.a);
Dividing by alpha is a bad idea, and the solution is to ensure that the RGB components of texColor in that step never go above 1. However! For curiosity's sake:
The original PNGs (created in GIMP) surprisingly work fine, and resized versions created with GIMP work fine as well. However, resizing the files using the code below causes the textures to have jaggies near any transparent pixels, even if percent is 1.0. Any idea what it is that I'm unwittingly changing about these images that suddenly causes the shader's bug to present itself?
NSImage* originalImage = [[NSImage alloc] initWithData:[currentFile regularFileContents]];
NSSize newSize = NSMakeSize([originalImage size].width * percent, [originalImage size].height * percent);
NSImage* resizedImage = [[NSImage alloc] initWithSize:newSize];
[resizedImage lockFocus];
[originalImage drawInRect:NSMakeRect(0,0,newSize.width,newSize.height)
fromRect:NSMakeRect(0,0,[originalImage size].width, [originalImage size].height)
operation:NSCompositeCopy fraction:1.0];
[resizedImage unlockFocus];
NSBitmapImageRep* bits = [[[NSBitmapImageRep alloc] initWithCGImage:[resizedImage CGImageForProposedRect:nil context:nil hints:nil]] autorelease];
NSData* data = [bits representationUsingType:NSPNGFileType properties:nil];
NSFileWrapper* newFile = [[[NSFileWrapper alloc] initRegularFileWithContents:data] autorelease];
[newFile setPreferredFilename:currentFilename];
[folder removeFileWrapper:currentFile];
[folder addFileWrapper:newFile];
[originalImage release];
[resizedImage release];
I typically set image interpolation to high when doing these kinds of resizing operations. This may be your issue.
[resizedImage lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[originalImage drawInRect:...]
[NSGraphicsContext restoreGraphicsState];
[resizedImage unlockFocus];
Another thing to make sure you're doing, though it may not help (see below):
[[NSGraphicsContext currentContext] setShouldAntialias:YES];
This may not fix it because you can't anti-alias without knowing the target background. But it still might help. If this is the problem (that you can't anti-alias this soon), you may have to composite this resizing at the point that you're ready to draw the final image.
What is the DPI of your source PNG? You are creating the second image by assuming that the original image's size is in pixels, but size is in points.
Suppose you have an image that is 450 pixels by 100 pixels, with DPI of 300. That image is, in real world units, 1 1/2 inches x 1/3 inches.
Now, points in Cocoa are nominally 1/72 of an inch. The size of the image in points is 108 x 24.
If you then create a new image based on that size, there's no DPI specified, so the assumption is one pixel per point. You're creating a much smaller image, which means that fine features are going to have to be approximated more coarsely.
You will have better luck if you pick one of the image reps of the original image and use its pixelsWide and pixelsHigh values. When you do this, however, the new image will have a different real world size than the original. In my example, the original was 1 1/2 x 1/3 inches. The new image will have the same pixel dimensions (450 x 100) but at 72 dpi, so it will be 6.25 x 1.39 inches. To fix this, you'll need to set the size of the new bitmap rep in points to the size of the original in points.
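Here is a sketch of that approach, reusing the question's variables (percent, originalImage) and assuming the PNG's first representation is an NSBitmapImageRep:
// Sketch only: size the new image in pixels, then restore the point size so the
// result keeps the source DPI instead of silently becoming 72 dpi.
NSBitmapImageRep *originalRep = (NSBitmapImageRep *)[[originalImage representations] objectAtIndex:0];
NSSize pixelSize = NSMakeSize([originalRep pixelsWide] * percent, [originalRep pixelsHigh] * percent);
NSImage *resizedImage = [[NSImage alloc] initWithSize:pixelSize];
[resizedImage lockFocus];
[originalImage drawInRect:NSMakeRect(0, 0, pixelSize.width, pixelSize.height)
fromRect:NSZeroRect
operation:NSCompositeCopy
fraction:1.0];
[resizedImage unlockFocus];
NSBitmapImageRep *bits = [[NSBitmapImageRep alloc] initWithCGImage:[resizedImage CGImageForProposedRect:nil context:nil hints:nil]];
// Set the rep's point size to the original's point size (scaled), preserving DPI.
[bits setSize:NSMakeSize([originalImage size].width * percent, [originalImage size].height * percent)];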

Core Graphics stroke width is inconsistent between lines & arcs?

The use case: I am subclassing UIView to create a custom view that "mattes" a UIImage with a rounded rectangle (clips the image to a rounded rect). The code is working; I've used a method similar to this question.
However, I want to stroke the clipping path to create a "frame". This works, but the arc strokes look markedly different than the line strokes. I've tried adjusting the stroke widths to greater values (I thought it was pixelation at first), but the anti-aliasing seems to handle arcs and lines differently.
Here's what I see on the simulator:
This is the code that draws it:
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, 2.0f);
CGContextAddPath(context, roundRectPath);
CGContextStrokePath(context);
Anyone know how to make these line up smoothly?
… but the anti-aliasing seems to handle arcs and lines differently.
No, it doesn't.
Your stroke width is consistent—it's 2 pt all the way around.
What's wrong is that you have clipped to a rectangle, and your shape's sides are right on top of the edges of this rectangle, so only the halves of the sides that are inside the rectangle are getting drawn. That's why the edges appear only 1 px wide.
The solution is either not to clip, to grow your clipping rectangle by 2 pt on each axis before clipping to it, or to move your shape's edges inward by 1 pt on each side. (ETA: Or, yeah, do an inner stroke.)
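For example, here is a sketch of the last option (context and STROKE_OPACITY are from the question; cornerRadius is a placeholder for whatever radius the rounded rect uses):
// Sketch only: inset by half the line width so the whole 2 pt stroke falls inside the clip.
CGFloat strokeWidth = 2.0f;
CGRect strokeRect = CGRectInset(self.bounds, strokeWidth / 2.0f, strokeWidth / 2.0f);
CGPathRef insetPath = CGPathCreateWithRoundedRect(strokeRect, cornerRadius, cornerRadius, NULL);
CGContextSetRGBStrokeColor(context, 0, 0, 0, STROKE_OPACITY);
CGContextSetLineWidth(context, strokeWidth);
CGContextAddPath(context, insetPath);
CGContextStrokePath(context);
CGPathRelease(insetPath);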
Just in case anyone is trying to do the same thing I am (round rect an image):
The UIImageView class has a layer property of type CALayer. CALayer already has this functionality built in (it was a little surprising to me that I couldn't find it mentioned anywhere):
UIImageView *thumbnailView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"foo.png"]];
thumbnailView.layer.masksToBounds = YES;
thumbnailView.layer.cornerRadius = 15.0f;
thumbnailView.layer.borderWidth = 2.0f;
[self.view addSubview:thumbnailView];
Also does the trick.
