I'm resizing some PNG files from within a Cocoa app. The files are eventually loaded as OpenGL textures by another app, and a poorly-written shader is applied, which, at one point, does the following:
texColor = mix(constant,vec4(texColor.rgb/texColor.a,texColor.a),texColor.a);
Dividing by alpha is a bad idea, and the solution is to ensure that the RGB components of texColor in that step never go above 1. However! For curiosity's sake:
The original PNGs (created in GIMP) surprisingly work fine, and resized versions created with GIMP work fine as well. However, resizing the files using the code below causes the textures to have jaggies near any transparent pixels, even if percent is 1.0. Any idea what it is that I'm unwittingly changing about these images that suddenly causes the shader's bug to present itself?
NSImage* originalImage = [[NSImage alloc] initWithData:[currentFile regularFileContents]];
NSSize newSize = NSMakeSize([originalImage size].width * percent, [originalImage size].height * percent);
NSImage* resizedImage = [[NSImage alloc] initWithSize:newSize];
[resizedImage lockFocus];
[originalImage drawInRect:NSMakeRect(0,0,newSize.width,newSize.height)
fromRect:NSMakeRect(0,0,[originalImage size].width, [originalImage size].height)
operation:NSCompositeCopy fraction:1.0];
[resizedImage unlockFocus];
NSBitmapImageRep* bits = [[[NSBitmapImageRep alloc] initWithCGImage:[resizedImage CGImageForProposedRect:nil context:nil hints:nil]] autorelease];
NSData* data = [bits representationUsingType:NSPNGFileType properties:nil];
NSFileWrapper* newFile = [[[NSFileWrapper alloc] initRegularFileWithContents:data] autorelease];
[newFile setPreferredFilename:currentFilename];
[folder removeFileWrapper:currentFile];
[folder addFileWrapper:newFile];
[originalImage release];
[resizedImage release];
I typically set image interpolation to high when doing these kinds of resizing operations. This may be your issue.
[resizedImage lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[originalImage drawInRect:...]
[NSGraphicsContext restoreGraphicsState];
[resizedImage unlockFocus];
Another thing to make sure you're doing, though it may not help (see below):
[[NSGraphicsContext currentContext] setShouldAntialias:YES];
This may not fix it because you can't anti-alias without knowing the target background. But it still might help. If this is the problem (that you can't anti-alias this soon), you may have to composite this resizing at the point that you're ready to draw the final image.
What is the DPI of your source PNG? You are creating the second image by assuming that the original image's size is in pixels, but size is in points.
Suppose you have an image that is 450 pixels by 100 pixels, with DPI of 300. That image is, in real world units, 1 1/2 inches x 1/3 inches.
Now, points in Cocoa are nominally 1/72 of an inch. The size of the image in points is 108 x 24.
If you then create a new image based on that size, there's no DPI specified, so the assumption is one pixel per point. You're creating a much smaller image, which means that fine features are going to have to be approximated more coarsely.
You will have better luck if you pick one of the image reps of the original image and use its pixelsWide and pixelsHigh values. When you do this, however, the new image will have a different real world size than the original. In my example, the original was 1 1/2 x 1/3 inches. The new image will have the same pixel dimensions (450 x 100) but at 72 dpi, so it will be 6.25 x 1.39 inches. To fix this, you'll need to set the size of the new bitmap rep in points to the size of the original in points.
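A minimal sketch of that approach, reusing the question's originalImage and percent names, assuming the first representation is the bitmap rep you care about, and scaling the point size by the same percent so the DPI is preserved:
NSImageRep *originalRep = [[originalImage representations] objectAtIndex:0];
NSSize pixelSize = NSMakeSize([originalRep pixelsWide] * percent,
                              [originalRep pixelsHigh] * percent);

NSImage *resizedImage = [[NSImage alloc] initWithSize:pixelSize];
[resizedImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[originalImage drawInRect:NSMakeRect(0, 0, pixelSize.width, pixelSize.height)
                 fromRect:NSZeroRect
                operation:NSCompositeCopy
                 fraction:1.0];
[resizedImage unlockFocus];

NSBitmapImageRep *bits = [[[NSBitmapImageRep alloc]
    initWithCGImage:[resizedImage CGImageForProposedRect:nil context:nil hints:nil]] autorelease];
// Keep the original real-world size: set the new rep's point size explicitly.
[bits setSize:NSMakeSize([originalImage size].width * percent,
                         [originalImage size].height * percent)];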
Related
In macOS programming, we know that:
Quartz uses a coordinate space where the origin (0, 0) is at the top-left of the primary display. Increasing y goes down.
Cocoa uses a coordinate space where the origin (0, 0) is the bottom-left of the primary display and increasing y goes up.
Now I am using a Quartz API, CGImageCreateWithImageInRect, to crop an image; it takes a rectangle as a parameter. The rect's Y origin comes from Cocoa's mouseDown events.
Thus I get crops at inverted locations...
I tried this code to flip the Y coordinate for my cropRect:
//Get the point in MouseDragged event
NSPoint currentPoint = [self.view convertPoint:[theEvent locationInWindow] fromView:nil];
CGRect nsRect = CGRectMake(currentPoint.x, currentPoint.y,
                           circleSizeW, circleSizeH);
//Now Flip the Y please!
CGFloat flippedY = self.imageView.frame.size.height - NSMaxY(nsRect);
CGRect cropRect = CGRectMake(currentPoint.x, flippedY, circleSizeW, circleSizeH);
But for areas near the top, I get wrong flippedY coordinates.
If I click near the top edge of the view, I get flippedY = 510 to 515.
At the top edge it should be between 0 and 10 :-|
Can someone point me to the correct and reliable way to flip the Y coordinate in such circumstances? Thank you!
Here is a sample project on GitHub highlighting the issue:
https://github.com/kamleshgk/SampleMacOSApp
As Charles mentioned, the Core Graphics API you are using requires coordinates relative to the image (not the screen). The important thing is to convert the event location from window coordinates to the view which most closely corresponds to the image's location and then flip it relative to that same view's bounds (not frame). So:
NSView *relevantView = /* only you know which view */;
NSPoint currentPoint = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
// currentPoint is in Cocoa's y-up coordinate system, relative to relevantView, which hopefully corresponds to your image's location
currentPoint.y = NSMaxY(relevantView.bounds) - currentPoint.y;
// currentPoint is now flipped to be in Quartz's y-down coordinate system, still relative to relevantView/your image
The rect you pass to CGImageCreateWithImageInRect should be in coordinates relative to the input image's size, not screen coordinates. Assuming the size of the input image matches the size of the view to which you've converted your point, you should be able to achieve this by subtracting the rect's corner from the image's height, rather than the screen height.
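Putting that together, a sketch. It assumes fullImage is the CGImage you are cropping, that self.imageView displays it 1:1, and it reuses the question's circleSizeW/circleSizeH:
NSView *relevantView = self.imageView;   // assumption: the view that displays the image 1:1
NSPoint p = [relevantView convertPoint:[theEvent locationInWindow] fromView:nil];
p.y = NSMaxY(relevantView.bounds) - p.y; // flip into Quartz's y-down space, using bounds, not frame

CGRect cropRect = CGRectMake(p.x, p.y, circleSizeW, circleSizeH);
CGImageRef cropped = CGImageCreateWithImageInRect(fullImage, cropRect);
// ... use `cropped`, e.g. wrap it in an NSImage, then:
CGImageRelease(cropped);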
I am setting the .contents of a CALayer to a CGImage, derived from drawing into an NSBitmapImageRep.
As far as I understand from the docs and WWDC videos, setting the layer's .contentsCenter to a rect like {{0.5, 0.5}, {0, 0}}, in combination with a .contentsGravity of kCAGravityResize, should lead to Core Animation resizing the layer by stretching the middle pixel, the top and bottom edges horizontally, and the sides vertically.
This very nearly works, but not quite. The layer resizes more-or-less correctly, but if I draw lines at the edge of the bitmap, as I resize the window the lines can be seen to fluctuate in thickness very slightly. It's subtle enough to be barely a problem until the resizing gets down to around 1/4 of the original layer's size, below which point the lines can thin and disappear altogether. If I draw the bitmaps multiple times at different sizes, small differences in line thickness are very apparent.
I originally suspected a pixel-alignment issue, but it can't be that, because the thickness of the stationary left-hand edge (for example) will fluctuate as I resize the right-hand edge. It happens on both 1x and 2x screens.
Here's some test code. It's the updateLayer method from a layer-backed NSView subclass (I'm using the alternative non-DrawRect draw path):
- (void)updateLayer {
id image = [self imageForCurrentScaleFactor]; // CGImage
self.layer.contents = image;
// self.backingScaleFactor is set from the window's backingScaleFactor
self.layer.contentsScale = self.backingScaleFactor;
self.layer.contentsCenter = NSMakeRect(0.5, 0.5, 0, 0);
self.layer.contentsGravity = kCAGravityResize;
}
And here's some test drawing code (creating the image supplied by imageForCurrentScaleFactor above):
CGFloat width = rect.size.width;
CGFloat height = rect.size.height;
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
pixelsWide: width * scaleFactor
pixelsHigh: height * scaleFactor
bitsPerSample: 8
samplesPerPixel: 4
hasAlpha: YES
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
bytesPerRow: 0
bitsPerPixel: 0];
[imageRep setSize:rect.size];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *ctx = [NSGraphicsContext graphicsContextWithBitmapImageRep:imageRep];
[NSGraphicsContext setCurrentContext:ctx];
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:rect];
[[NSColor blackColor] setStroke];
[NSBezierPath setDefaultLineWidth:1.0f];
[NSBezierPath strokeRect:insetRect];
[NSGraphicsContext restoreGraphicsState];
// image for CALayer.contents is now [imageRep CGImage]
The solution (if you're talking about the problem I think you're talking about) is to have a margin of transparent pixels forming the outside edges of the image. One pixel thick, all the way around, will do it. The reason is that the problem (if it's the problem I think it is) arises only with visible pixels that touch the outside edge of the image. Therefore the idea is to have no visible pixels touch the outside edge of the image.
I have found a practical answer, but would be interested in comments filling in detail from anyone who knows how this works.
The problem did prove to be to do with how the CALayer was being stretched. I was drawing into a bitmap of arbitrary size, on the basis that (as the CALayer docs suggest) use of a .contentsCenter with zero width and height would in effect do a nine-part-image stretch, selecting the single centre pixel as the central stretching portion. With this bitmap as a layer's .contents, I could then resize the CALayer to any desired size (down or up).
Turns out that the 'arbitrary size' was the problem. Something odd happens in the way CALayer stretches the edge portions (at least when resizing down). By instead making the initial frame for drawing tiny (i.e. just big enough to fit my outline drawing plus a couple of pixels for the central stretching portion), nothing spurious makes its way into the edges during stretching.
The bitmap stretches properly if created with a rect just big enough to fit the contents and the stretchable centre pixel, i.e.:
NSRect rect = NSMakeRect(0, 0, lineWidth * 2 + 2, lineWidth * 2 + 2);
This tiny image stretches to any larger size perfectly.
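For reference, a minimal sketch of building such a tiny stretchable image (values are illustrative; it reuses the question's manual retain/release style):
CGFloat lineWidth = 1.0;
NSRect rect = NSMakeRect(0, 0, lineWidth * 2 + 2, lineWidth * 2 + 2);

NSImage *image = [[NSImage alloc] initWithSize:rect.size];
[image lockFocus];
[[NSColor blackColor] setStroke];
// Inset by half the line width so the stroke sits fully inside the bitmap.
[NSBezierPath strokeRect:NSInsetRect(rect, lineWidth / 2.0, lineWidth / 2.0)];
[image unlockFocus];

self.layer.contents = (id)[image CGImageForProposedRect:NULL context:nil hints:nil];
self.layer.contentsCenter = CGRectMake(0.5, 0.5, 0, 0);  // stretch the single centre pixel
self.layer.contentsGravity = kCAGravityResize;
[image release];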
In my NSView subclass, in drawRect I stroke a number of NSBezierPaths. I would like the lines drawn as a result of these strokes to have the exact same width, preferably just a couple of pixels wide, no matter the scaling of the view. Here's my drawRect:
- (void)drawRect:(NSRect)dirtyRect
{
NSSize x = [self convertSize:NSMakeSize(1,1) fromView:nil];
printf("size = %f %f\n", x.width, x.height);
for(NSBezierPath *path in self.paths) {
[path setLineWidth:x.width];
[path stroke];
}
}
Here's a screenshot of what I am seeing:
Can anyone suggest how I can get the crisp, consistent path outlines that I am looking for?
Thanks.
Try to match the exact pixels of the device (more difficult since the iPhone 5).
Do not use coordinates on half points, like 0.5 (they look fine on Retina, but on non-Retina they are unsharp).
The line width extends half to one side of the path and half to the other.
So if you have a lineWidth of 2 and coordinates at integer values, it should be sharp.
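For example (illustrative values), a 2-point stroke centred on integer coordinates covers whole pixels on both 1x and 2x displays:
NSBezierPath *line = [NSBezierPath bezierPath];
[line moveToPoint:NSMakePoint(0.0, 10.0)];
[line lineToPoint:NSMakePoint(100.0, 10.0)];
[line setLineWidth:2.0];   // half extends above the path, half below, ending on pixel boundaries
[line stroke];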
I'm using OpenGL to draw in Mac OS. When my code runs on Retina display everything works fine except text drawing.
Under a Retina display the text is twice as big as it should be. It happens because the font size is in points, and each point is 2 pixels under Retina, but OpenGL is pixel based.
Here is the correct text drawing under standard display:
Here is the incorrect text drawing under Retina display:
Here is how I normally draw strings. Since OpenGL does not have text drawing functions, in order to draw text I do the following:
Get the font:
NSFontManager *fontManager = [NSFontManager sharedFontManager];
NSString *font_name = [NSString stringWithCString: "Helvetica" encoding: NSMacOSRomanStringEncoding];
font = [fontManager fontWithFamily: font_name traits:fontStyle weight:5 size:9];
attribs = [[NSMutableDictionary dictionaryWithCapacity: 3] retain];
[attribs setObject:font forKey:NSFontAttributeName];
Create and measure the string:
NSString* aString = [NSString stringWithCString: "blah blah" encoding: NSMacOSRomanStringEncoding];
NSSize frameSize = [aString sizeWithAttributes: m_attribs];
Allocate NSImage with the size:
NSImage* image = [[NSImage alloc] initWithSize:frameSize];
[image lockFocus];
Draw the string into the image:
[aString drawAtPoint:NSMakePoint (0, 0) withAttributes:m_attribs];
Get the bits:
NSBitmapImageRep* bitmap = [[NSBitmapImageRep alloc] initWithFocusedViewRect:NSMakeRect (0.0f, 0.0f, frameSize.width, frameSize.height)];
[image unlockFocus];
Create OpenGL texture:
GLuint texture = 0;
glGenTextures(1, &texture);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA, GLsizei(frameSize.width), GLsizei(frameSize.height), 0, GL_RGBA, GL_UNSIGNED_BYTE, [bitmap bitmapData]);
Draw the texture:
glBindTexture ….
other OpenGL drawing code
My question is how to get NSString to draw in pixel resolution not in points.
I tried the following:
Draw at half the point size: 4.5 instead of 9. This gives me the correct size but the text is drawn blurry.
Draw at point size and shrink the texture to half the size in OpenGL; again, this does not give good-looking results.
OpenGL's coordinate system is in fact point based, not pixel based, but you are the one who decides what those points are. A context defines the 2D coordinate system via the glOrtho function (or you could construct an orthographic matrix by hand), which sets up the min and max x,y coordinates on the screen. For example, an orthographic projection could be set up so that 0 is on the left of the screen and 100 is on the right, regardless of the size of the framebuffers you are rendering into.
The font texture appears to be created just fine. The problem is that you are rendering it on geometry twice as large as it needs to be. In OpenGL, texture size does not affect the size of the object rendered on your screen. The size on screen is defined by the geometry passed to glDrawArrays or glBegin etc., not by the texture.
I think the problem is that you are using the pixel size of the font texture to define the quad size used to render on screen. This would put your problem in the "other OpenGL drawing code" section. To fix that, you could apply a scale factor to the drawing: in Retina mode the scale factor would be 0.5, and for normal screens it would be 1.0 (UIKit uses a similar idea to render UIView content).
The quad calculation could look something like this:
quad.width = texture.width * scaleFactor
quad.height = texture.height * scaleFactor
Another option would be to separate the quad rendering size completely from the texture. If you had a function or a class for drawing text it could have a font size parameter which it would use as the actual quad size instead of using the texture size.
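A sketch of that idea in legacy fixed-function GL (rectangle textures take pixel texture coordinates; the quad position x, y and scaleFactor are assumptions here, while texture and frameSize come from the question's code):
float drawWidth  = frameSize.width  * scaleFactor;   // e.g. scaleFactor = 0.5 on Retina, 1.0 otherwise
float drawHeight = frameSize.height * scaleFactor;

glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f,             0.0f);              glVertex2f(x,             y);
glTexCoord2f(frameSize.width,  0.0f);              glVertex2f(x + drawWidth, y);
glTexCoord2f(frameSize.width,  frameSize.height);  glVertex2f(x + drawWidth, y + drawHeight);
glTexCoord2f(0.0f,             frameSize.height);  glVertex2f(x,             y + drawHeight);
glEnd();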
For my retina display, I had the problem of my framebuffer not fitting the actual window (the full buffer is rendered to a quarter of the window). In that case, using a doubled viewport solves the problem.
# Instead of actual window size width*height,
# double the dimensions for retina display
glViewport(0, 0, width*2, height*2)
In your case (this part is only an assumption, since I cannot run your code), changing the frame sizes that you are passing to gl for texture creation can do the trick.
GLsizei(frameSize.width*2), GLsizei(frameSize.height*2)
Given that you have an NSBitmapImageRep and get pixel raster data from that, you should be using bitmap.pixelsWide and bitmap.pixelsHigh, not frameSize when creating the texture from the bitmap.
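A sketch of the texture-creation step with that change (other arguments as in the question's code):
GLsizei texWidth  = (GLsizei)[bitmap pixelsWide];   // pixel dimensions, not the point size
GLsizei texHeight = (GLsizei)[bitmap pixelsHigh];

GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, texture);
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RGBA,
             texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, [bitmap bitmapData]);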
I have an NSBezierPath that makes a rounded rectangle, but the corners of it look choppy and appear brighter than the rest of the stroke when viewed at full scale. My code is:
NSBezierPath *path = [NSBezierPath bezierPath];
[path appendBezierPathWithRoundedRect:NSMakeRect(0, 0, [self bounds].size.width, [self bounds].size.height) xRadius:5 yRadius:5];
NSGradient *fill = [[NSGradient alloc] initWithColorsAndLocations:[NSColor colorWithCalibratedRed:0.247 green:0.251 blue:0.267 alpha:0.6],0.0,[NSColor colorWithCalibratedRed:0.227 green:0.227 blue:0.239 alpha:0.6],0.5,[NSColor colorWithCalibratedRed:0.180 green:0.188 blue:0.196 alpha:0.6],0.5,[NSColor colorWithCalibratedRed:0.137 green:0.137 blue:0.157 alpha:0.6],1.0, nil];
[fill drawInBezierPath:path angle:-90.0];
[[NSColor lightGrayColor] set];
[path stroke];
Here's a picture of two of the corners (it's not as obvious in a small picture):
Anyone know what's causing this? Am I just missing something?
Thanks for any help.
The straight lines of the roundrect are exactly on the borders of the view, so half the width of each line is getting cut off. (As if they were on a subpixel.)
Try changing
NSMakeRect(0, 0, [self bounds].size.width, [self bounds].size.height)
to
NSMakeRect(0.5, 0.5, [self bounds].size.width - 1, [self bounds].size.height - 1)
If an NSBezierPath ever looks a bit weird or blurry, try shifting it over half a pixel.
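The same half-point inset can also be written with NSInsetRect:
NSRect pathRect = NSInsetRect([self bounds], 0.5, 0.5);
[path appendBezierPathWithRoundedRect:pathRect xRadius:5 yRadius:5];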
Take a look at the setFlatness: method in the NSBezierPath docs. It controls how smooth rendered curves are. I believe setting it to a smaller number (the default being .6) will yield smoother curves, at the cost of more computation (though for simple paths, I doubt it matters a whole lot).
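For example, before stroking the path:
[path setFlatness:0.1];   // smaller than the default 0.6, for smoother rendered curves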