My application draws some images using cairo like this:
cairo_set_source_surface(cr, _page_down_icon, icon_x, y);
cairo_paint(cr);
where the page down icon is a PNG I loaded via cairo_image_surface_create_from_png.
This works fine on standard screens but produces a low-quality image on retina displays. So I'm thinking of providing a second image with double the resolution (as is usual for NSImage). However, if I just draw this image, the result is twice as large as the standard image. So my question is: how would I draw the high-res image with cairo on a retina display so that it looks crisp?
cairo_scale is your friend. With this function you can adjust the scaling of the axes of your surface. To get the result you want, you'd scale by 0.5 for your second image (note that you'll have to adjust the target position for the image as well!).
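For example, a minimal sketch of what that could look like with the snippet from the question (the name _page_down_icon_2x for the double-resolution surface is made up here):

/* Assumes _page_down_icon_2x was loaded elsewhere via
   cairo_image_surface_create_from_png() at twice the pixel size. */
cairo_save(cr);
cairo_translate(cr, icon_x, y);    /* move to the target position first */
cairo_scale(cr, 0.5, 0.5);         /* halve both axes: 2x pixels -> 1x on-screen size */
cairo_set_source_surface(cr, _page_down_icon_2x, 0, 0);
cairo_paint(cr);
cairo_restore(cr);                 /* undo the transform for later drawing */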
I want to draw a series of textures into a Metal view in order to present a complete image. On a regular screen, the images are presented at exactly 1:1 scale (meaning a 100x100-pixel texture is presented in a 100x100-pixel square).
Drawing it on a retina display actually gives me a 200x200 square.
Now, there may be 2 different approaches:
1) Generate the entire image into a 100x100 square and let the Metal view upscale it to a 200x200 square - it works.
2) Upscale each texture and generate the image directly into a 200x200 square. Why take this approach? Because some of the textures (like text) are generated dynamically and can be generated at a higher resolution, which is impossible with the first approach.
Unfortunately, with this approach an ugly square is visible around each texture.
I tried playing with sizes, clamp options, etc., yet I could not find any solution.
Any help would be highly appreciated!
[Screenshot: image from regular screen]
[Screenshot: image from retina screen]
Found a solution. In the fragment shader, the texture sampler was defined as:
constexpr sampler s = sampler(coord::normalized, address::repeat, filter::nearest);
instead of:
constexpr sampler s = sampler(coord::normalized, address::clamp_to_edge, filter::nearest);
With repeat addressing, sample coordinates at or just past the texture's edge wrap around to the opposite side, which is presumably what produced the square outline around each upscaled texture; clamp_to_edge repeats the edge texel instead.
As modern macOS devices use a scaled HiDPI resolution by default, bitmap images get blurred on screen. Is there a way to render a bitmap pixel by pixel to the true native physical pixels of the display? Any Core Graphics, OpenGL, or Metal API that would allow this without changing the display mode of the screen?
If you are thinking of convertXXXXToBacking and friends, stop. Here is the explanation. A typical 13-inch MacBook Pro now has a native 2560x1600 pixel resolution. The default recommended screen resolution after a fresh macOS install is 1440x900. The user can change it to 1680x1050 via System Preferences. In either the 1440x900 or the 1680x1050 case, the backingScaleFactor is exactly 2. The typical rendering route first renders everything to an unphysical 2880x1800 or 3360x2100 resolution, and the OS/GPU does the final resampling with an unknown method.
Your best bet is probably Metal. The documentation for MTKView's drawableSize claims that the default is the view's size "in native pixels". I'm not certain whether that means device pixels or backing store pixels. If the latter, you could turn off autoResizeDrawable and set drawableSize directly.
To obtain the display's physical pixel count, you can use my answer here.
You can also try using Core Graphics with a CGImage of that size drawn to a rect of the screen size in backing store pixels. It's possible that all of the scaling will be internally cancelled out.
If these are resource images, use asset catalogs to provide 2x and 3x resolution versions of your images, icons, etc. Classes like NSImageView will automatically select the best version for the display resolution.
If you just have some random image and you want it draw at the resolution of the display, get the backingScaleFactor of the view's window. If it's 2.0 then drawing a 200 x 200 pixel image into a 100 x 100 coordinate point rectangle will draw it at 1:1 native resolution. Similarly, if the scale factor is 3.0, a 300 x 300 pixel image will draw into a 100 x 100 coordinate point rectangle.
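As a rough sketch of that 2.0 case in Core Graphics terms (the function and variable names here are just for illustration; ctx would be the view's CGContextRef and image2x a 200 x 200 pixel CGImageRef):

#include <CoreGraphics/CoreGraphics.h>

static void DrawImageAtNativeScale(CGContextRef ctx, CGImageRef image2x,
                                   CGFloat x, CGFloat y)
{
    /* With a backingScaleFactor of 2.0, a 200 x 200 pixel image drawn into a
       100 x 100 point rect maps one image pixel to one device pixel. */
    CGRect dest = CGRectMake(x, y, 100.0, 100.0);                 /* size in points, not pixels */
    CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);  /* optional: avoid resampling */
    CGContextDrawImage(ctx, dest, image2x);
}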
Not surprisingly, Apple has an extensive guide on this subject: High Resolution Guidelines for OS X
I use Core Text to draw text to an offscreen bitmap context using CTLineDraw(). The bitmap is then processed internally before it is drawn to my window.
The problem here is that bitmap contexts aren't scaled on Retina Macs. On a Retina Mac, the text is still drawn at 72 dpi to the bitmap, but of course it should be drawn at 144 dpi, because the pixel density is twice as high. As a result, the text currently looks blurry: it is drawn at 72 dpi to the offscreen bitmap, and this bitmap is then scaled up when it is drawn to the window.
What is the best way to make Core Text Retina-aware in this context? Should I simply pass a transformation matrix to CTFontCreateWithName() that contains the screen's backingScaleFactor in its scale coefficients? That does look a little hackish, though. That's why I'm asking for some feedback or a better idea...
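For what it's worth, here is a minimal sketch of drawing at the doubled density into the offscreen bitmap itself rather than scaling the font; the function name and parameters are made up for illustration, and scale would come from the window's backingScaleFactor:

#include <CoreGraphics/CoreGraphics.h>
#include <CoreText/CoreText.h>

/* Create an offscreen bitmap that is `scale` times larger in pixels but keeps
   its text layout in points, so glyphs are rasterized at the full density. */
CGContextRef CreateHiDPILineBitmap(CTLineRef line, CGFloat width, CGFloat height,
                                   CGFloat scale)
{
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             (size_t)(width * scale),
                                             (size_t)(height * scale),
                                             8, 0, cs,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(cs);

    CGContextScaleCTM(ctx, scale, scale);    /* layout stays in points, pixels are scale x */
    CGContextSetTextPosition(ctx, 0.0, 0.0);
    CTLineDraw(line, ctx);
    /* The caller should later draw this bitmap into a width x height point rect
       so it maps 1:1 to device pixels instead of being scaled up again. */
    return ctx;
}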
I tried using the new PDF feature of Xcode, which basically scales the image to 1x, 2x, and 3x. Unfortunately I'm also using SpriteKit, so I'd rather use SKTextureAtlases than the asset catalog.
My problem is that the rasterized version of the pdf looks better than any exports from Adobe Illustrator (or Photoshop using Smart Objects).
Here's a link to an Imgur album with examples.
Specifically, the image exported from Illustrator is in two square sizes: 60px and 90px. The images in Xcode all have the same name but are in two different atlases: atlas@2x.atlas and atlas@3x.atlas. The PDF was exported at 30px square from Illustrator, and then Xcode scales it to the 2x and 3x versions.
So why does the Xcode version look sharper (especially around the junction between the rounded corner and the flat side)?
I think this could be a resolution issue: Xcode does not have the resolution it needs, so it enlarges the image, causing glitches.
When an SKSpriteNode is created without an explicit size, the size of its texture is used. Thus if your SKSpriteNode has a size of 30x30 points, you have to provide a 60x60-pixel image for @2x and a 90x90-pixel image for @3x.
This may also be due to export settings in Illustrator.
To have a true comparison on screen, you can display two SKSpriteNodes with the same size of 30x30 points:
the first takes its texture from the atlas generated by Illustrator,
the second takes its texture from an asset-catalog image generated by the PDF feature of Xcode.
Note that for this test you don't even need an atlas, as an atlas is intended for rendering optimization.
I've read the book Retinafy Me. It basically says to double the size of your images; when an image is then displayed on a retina screen at half its pixel size, it will look great.
My problem is that the original images I have can't be doubled in size, i.e. the images are 750px wide and are to be displayed 500px wide. What do I do? Is a 1.5x image better looking than a 1x image (on a retina screen), or is it just needlessly adding to the file size?
I've tried using the 1.5x images (750px scaled down to 500px) and they looked good on a retina screen in the Apple Store where I checked them out. But I couldn't do a definitive comparison: I don't have a retina screen of my own and I can't find any info about this anywhere.
A pixel is still a pixel on a retina screen. The difference is simply that a retina screen uses four physical pixels to display one image pixel if it has no additional information, i.e. no extra image pixels to work with. This is why the suggestion is to double the image size.
Essentially your image will be rendered by stretching it to 1000px wide. When stretched, a 750px wide image will still look better than a 500px wide image. You can try this for yourself in Paint.
There is a pretty good explanation of how it works on http://crypt.codemancers.com/posts/2013-11-03-image-rendering-on-hd-screens/
Hope this helps.