NSColor colorWithPatternImage: and scaling (NSAffineTransform) - cocoa

So the Apple docs state that:
The image to use as the pattern for the color object. The image is tiled starting at the bottom of the window. The image is not scaled.
(NSColor Class Reference)
Well, except that the pattern does, in fact, scale somewhat.
At least when drawing into a graphics context that is scaled up/down using NSAffineTransform.
However, the pattern is scaled by a lesser degree than the actual graphics: e.g. when drawing a filled NSBezierPath object and scaling the context up/down, the pattern is only slightly scaled up/down, not by the same factor.
This is observed on OS X 10.6 - any ideas if that's a bug or some documented behaviour?
Ideally we'd like to use the same pattern scale factor no matter how big or small the scaled graphic (rect, bezierpath, ..) is.
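For reference, here is a minimal sketch of the kind of drawing described above (a custom NSView's drawRect:, with a placeholder image name and geometry), in case it helps reproduce the behaviour:
- (void)drawRect:(NSRect)dirtyRect
{
    // Placeholder pattern image; any small tile will do.
    NSImage *tile = [NSImage imageNamed:@"patternTile"];
    NSColor *patternColor = [NSColor colorWithPatternImage:tile];

    // Reference: pattern-filled path in the unscaled context.
    [patternColor set];
    [[NSBezierPath bezierPathWithRect:NSMakeRect(10, 10, 100, 100)] fill];

    // Same fill drawn through a scaled NSAffineTransform.
    [NSGraphicsContext saveGraphicsState];
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform scaleBy:2.0];
    [transform concat];
    [patternColor set];
    [[NSBezierPath bezierPathWithRect:NSMakeRect(60, 10, 100, 100)] fill];
    [NSGraphicsContext restoreGraphicsState];
}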

Related

Direct2D: Check if image is outside visible area before drawing?

Is it a reasonable optimization to omit calls to ID2D1HwndRenderTarget::DrawBitmap() if the image will end up outside the visible area? If I implement the checking logic in the application, that will cost some performance, so if the first thing D2D does is the same check anyway, I'd rather not duplicate it.
I ran a test with my application, which renders part of its UI using Direct2D (with RenderDoc attached to inspect the draw calls), and the behaviour seems a bit inconsistent.
I render a mix of rectangles, text, path geometries (beziers) and a rectangle with a bitmap brush (which should be equivalent to your DrawBitmap call).
Then I captured a frame with all those objects visible, and another one after panning my UI (using a transform) so the objects are not visible.
From there I could check what was drawn and what was not:
Text is always culled
Solid color rectangles are not culled
Most of the time path geometries are culled, but sometimes not.
Rectangles with a bitmap brush are NEVER culled
So it seems Direct2D is making different decisions depending on the types of elements you plan to draw.
Since rectangles are easily batched and cheap to draw, it seems that they are just drawn regardless.
Bitmap rectangles and text require more work, so it seems they are effectively culled.
Path geometry culling appeared to depend on how many polygons the geometry is tessellated into (I had a path that translated to 26 primitives and was not culled, and another that translated to 120 and was culled).
So you can trust Direct2D to perform that optimization, but I would personally implement a quick rectangle-to-rectangle check just in case (it's not going to hurt your performance, as it's an extremely simple operation).
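For what it's worth, the check itself is just an axis-aligned rectangle overlap test; here is a sketch in plain C, using a hypothetical RectF that mirrors D2D1_RECT_F's left/top/right/bottom layout:
#include <stdbool.h>

// Hypothetical rect type mirroring D2D1_RECT_F (left/top/right/bottom).
typedef struct { float left, top, right, bottom; } RectF;

// True if the two rectangles overlap. Call this with the bitmap's destination
// rect and the visible area (both in the same coordinate space), and skip the
// DrawBitmap call when it returns false.
static bool RectsOverlap(RectF a, RectF b)
{
    return a.left < b.right && b.left < a.right &&
           a.top < b.bottom && b.top < a.bottom;
}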

SkiaSharp Text Size on Xamarin Forms

How does the TextSize property on an SKPaint object relate to the 'standard' Xamarin Forms FontSize?
In the image you can see the difference between size 40 on a label and as painted. What would I need to do to make them the same size?
As @hankide mentioned, it has to do with the fact that the native OS has scaling for UI elements so the app "looks the same size" on different devices.
This is great for buttons and all that as the OS is drawing them. So if the button is bigger, the OS just scales up the text. However, with SkiaSharp, we have no idea what you are drawing so we can't do any scaling. If we were to scale, the image would become blurry or pixelated on the high resolution screens.
One way to get everything the same size is to do a global scale before drawing anything:
var scale = canvasWidth / viewWidth;
canvas.Scale(scale);
And this is often good enough, but sometimes you really want to draw items differently on a high resolution screen. An example would be a tiled background. Instead of stretching the image on a bigger canvas, you may want to just tile it - preserving the pixels.
In the case of this question, you can either scale the entire canvas before drawing, or you can just scale the text:
var paint = new SKPaint {
    TextSize = 40 * scale
};
This way, the text size is increased, but the rest of the drawing is on a larger canvas.
I have an example on GitHub: https://github.com/mattleibow/SkiaSharpXamarinFormsDemo
This compares Xamarin.Forms, SkiaSharp and Native labels. (They should all be exactly the same size)
I think that the problem is in the way Xamarin.Forms handles font sizes. For example on Android, you could define the font size in pixels (px), scale-independent pixels (sp), inches (in), millimeters and density-independent pixels (dp/dip).
I can't remember how Xamarin.Forms handles the sizes (px, sp or dp), but the difference you see here is because of that. What you could do is create an Effect that changes the font size handling on the native control and try to match the sizing provided by SkiaSharp.

LÖVE viewport like Libgdx

I wonder if the LÖVE framework has the same feature as libGDX's viewport; this feature was really great when I used libGDX, and I wonder if there's anything similar in LÖVE.
About viewports: https://github.com/libgdx/libgdx/wiki/Viewports
If, by viewport, you mean using normalised coordinates (resolution-independent), then yes, LÖVE can do that.
Although it's not available by default in the framework itself, there's always a possibility to add your own features.
You could make a Viewport system using LÖVE's canvases.
Start by creating a canvas with fixed dimensions,
then make your game using percentages of these dimensions instead of regular pixel positioning.
For example, player.x = 80 (left side of the screen) becomes player.x = canvas:getWidth()*.1
Once you've drawn everything into your virtual window, that is, the canvas, you can scale it and render your game to fit any window resolution (for example, scaling by min(windowWidth/canvasWidth, windowHeight/canvasHeight) keeps the whole canvas visible while preserving its aspect ratio).
I suggest that you take a look at this library that handles all the scaling stuff for you, once you provide your game's virtual dimensions.

Simulate the effect of device back-light on human image perception

Even if we see the exact same image on a device (e.g. an iPad), we perceive it differently when the back-light is different. For example, if we look at the following two images, they are both the same image, but the latter has no back-light (disregard the reflections), and we perceive it differently. My question is: how can I simulate the effect of no back-light, not by actually dimming the screen but by manipulating the original image? Maybe by applying some kind of semi-transparent black mask?
Full backlight
No backlight
Yes, you can simulate it. Physically it's a very simple effect, and only your eyes make it look like a more complicated illusion.
It's just a combination of two layers:
the photo (backlight)
the reflection (no backlight image)
The reflection simply exists all the time. The backlight image is turned on or off. In terms of implementation these are additive layers (sum of pixel values).
Eyes only perceive backlight on/off as a complete change of the image, because the eyes adjust to overall brightness level of the screen.
If you're implementing that in code:
make sure you use linear light colorspace for processing (remove gamma correction, process pixels, apply gamma correction).
when displaying the image on screen, normalize brightness (since to display the effect on screen you have to have it brighter than the actual real-world effect, and you have lower dynamic range to work with).
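Putting those two points together, here is a minimal per-channel sketch in plain C; it assumes a simple 2.2 gamma curve rather than the exact sRGB transfer function, and channel values in the 0..1 range:
#include <math.h>

// photo: channel value of the backlit image; reflection: channel value of the
// reflection layer; backlightOn: 1.0f or 0.0f; normalize: brightness scale so
// the result fits the screen's range.
static float simulate_channel(float photo, float reflection,
                              float backlightOn, float normalize)
{
    float photoLinear = powf(photo, 2.2f);            // decode to linear light
    float reflectionLinear = powf(reflection, 2.2f);

    // The reflection is always present; the backlit image is simply added on top.
    float combined = reflectionLinear + backlightOn * photoLinear;

    combined *= normalize;                            // fit into displayable range
    if (combined > 1.0f) combined = 1.0f;
    return powf(combined, 1.0f / 2.2f);               // re-apply gamma for display
}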

How to get a 1 pixel line with NSBezierPath?

I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look like 1 pixel wide lines - I know, they're not really pixels but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap. The 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 will only make the line look more vague and not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier path strokes are drawn centered on the path, so if you draw a 1 pixel wide path along the X-axis, the line actually covers the Y-coordinates { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line does not straddle the sub-pixel boundaries. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is average the color: for a pixel that is half covered by the line color (black) and half by the background (white), it simply draws that pixel in grey:
This is where your washed-out drawings come from. The fix is now obvious: Don't draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:
Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you're e.g. clipping to the rectangle, this will cut off the lower right:
Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you're drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
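To make that concrete, here is a small sketch (inside a custom NSView's drawRect:, with illustrative geometry) that combines the half-pixel inset with the backing-alignment call mentioned above:
- (void)drawRect:(NSRect)dirtyRect
{
    NSRect box = NSMakeRect(10.0, 10.0, 100.0, 50.0);   // illustrative rect

    // Snap the rect to device pixels first, whatever the backing scale (1x, 2x, ...).
    box = [self backingAlignedRect:box options:NSAlignAllEdgesNearest];

    // Inset by half the line width so the stroke covers whole pixels
    // and stays inside the rect instead of straddling its edges.
    NSBezierPath *path = [NSBezierPath bezierPathWithRect:NSInsetRect(box, 0.5, 0.5)];
    [path setLineWidth:1.0];
    [[NSColor blackColor] setStroke];
    [path stroke];
}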
PS - Since I went to the hassle of writing this all up, I've put this up on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in the Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1 pixel line. After drawing just switch it on again.
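For completeness, a sketch of that on/off pattern, using save/restore so the previous anti-aliasing setting comes back automatically:
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];

// ... draw the crisp lines here, e.g.:
[NSBezierPath strokeLineFromPoint:NSMakePoint(10.0, 20.0)
                          toPoint:NSMakePoint(110.0, 20.0)];

[NSGraphicsContext restoreGraphicsState];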
I tried the solution suggested by Francis McGrew, offsetting the x coordinate by 0.5; however, that did not make any difference to the appearance of my line.
EDIT:
To be more specific, I changed x and y coordinates individually and together with an offset of 0.5.
EDIT 2:
I must have done something wrong, as changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.