Why is clipping considered to be "too slow with cairo"? - user-interface

I picked up working on updating the classic GNOME Clearlooks theme (originally GTK2), featured prominently in Fedora 14, for GTK3 by forking the outdated Clearlooks-Phenix project.
I've never worked on any GTK3 theme before, so I came in with some false assumptions, namely that clipping rules for the CSS stylesheets would be consistent with how a browser handles them.
One of these assumptions led me to file a bug report which was closed with the response:
Yes, clipping is not being done in gtk3. It's too slow with cairo.
2D clipping has been a staple of accelerated graphics since the days of Windows 3.1. Cairo can already take advantage of display hardware acceleration when available, with many of its demos prominently using this feature.
Clipping seems so fundamentally basic that on today's hardware it should be effectively free. In what situations could it be considered slow enough to disable it selectively or entirely (some regions of GTK elements are seemingly clipped or overdrawn, I don't know which)? Is this something fundamental to Cairo, since it was mentioned specifically?

Just as Timm Bäder wrote: Clipping with something that cuts pixels in half is complicated. (For example: A circle is more complicated than a rectangle that fits to the pixel grid.)
Sure, clipping so that only whole pixels are painted to can speed things up, since fewer pixels need to be touched. However, a clip path that contains only 20% of a pixel means that some interpolation with the current value of the pixel is required.
Simple example: painting a pixel white. If the whole pixel is covered, it is simply set: pixel = white. But when only 20% of the pixel is to be drawn, you end up with something like pixel = white * 0.2 + pixel * 0.8, which is much more complicated.
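To make that concrete, here is a minimal C sketch of the per-pixel work for one 8-bit channel (the pixel format and coverage representation are assumptions for illustration):

#include <stdint.h>

/* Blend a source value into one 8-bit channel, weighted by how much of
 * the pixel the clip path covers (coverage in [0, 255]).
 * Full coverage is a plain store; partial coverage needs a read,
 * two multiplies and an add per channel. */
static uint8_t blend_channel(uint8_t dst, uint8_t src, uint8_t coverage)
{
    if (coverage == 255)
        return src;  /* whole pixel covered: just set it */
    /* e.g. 20% coverage: dst = src * 0.2 + dst * 0.8, in fixed point */
    return (uint8_t)((src * coverage + dst * (255 - coverage)) / 255);
}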

Related

Querying graphics capabilities for deciding whether to apply GPU-intensive effects (through SpriteKit)

I have a game written with SpriteKit which uses an SKEffectNode with a blur effect to blur a set of sprites, one of which has a fairly large texture, and which together cover a fairly large area of the screen. An iMac and MacBook Pro cope quite happily with this, but on a more humble MacBook there is a notable drop in frame rate with the effect node added in. Since the effect isn't crucial to the functionality of the game, I could simply not add the SKEffectNode for machines with less powerful graphics capabilities.
So then the question: what would be a good programmatic check that I could make to determine the "power of the GPU" or "performance when applying texture effects" or [suggest better metric here] and via what API? Thanks for your suggestions!
You'll have to create a performance test using your actual blurring processes and some sample content to get an accurate idea of the time cost of it on each generation of hardware.
Blurs are really weird things, programmatically. A Box Blur can give you most of the appearance of a nice, soft gaussian blur for much less processing cost. A zoom or motion blur (that looks good) is surprisingly expensive, even on strong hardware.
And there are some amazingly effective "cheats" when doing blurs. Because there's no need for detail, you can heavily optimise the operations, particularly if the blurs are strong.
Apple, it's believed, does something like this, for example, with its blurs:
1) Massively shrink the target image
2) Do a gaussian blur on this tiny image
3) Scale it back up, somewhat
4) Apply a cheap Box Blur to soften it
5) Fully scale back to the desired size
By way of a terrible example that benefits from scaling well (with filtering set for good scaling):
This is the full sized image blurred:
And here's a version of the same image, scaled to a 16th of its original size, blurred, and then the blurred image scaled back up. As you can see, due to the good scaling and lack of detail, there's hardly any difference in the blurred image, but the blur takes MUCH less processing energy and time:
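A hedged sketch of that cheat in C, on a single 8-bit grayscale channel. The shrink factor, blur radius and the nearest-neighbour resampler are illustrative assumptions; the real trick uses filtered scaling, as noted above, and is not Apple's actual pipeline:

#include <stdlib.h>

/* Nearest-neighbour resample of an 8-bit grayscale image (illustrative). */
static void resample(const unsigned char *src, int sw, int sh,
                     unsigned char *dst, int dw, int dh)
{
    for (int y = 0; y < dh; y++)
        for (int x = 0; x < dw; x++)
            dst[y * dw + x] = src[(y * sh / dh) * sw + (x * sw / dw)];
}

/* Naive box blur with the given radius (illustrative, not optimised). */
static void box_blur(unsigned char *img, int w, int h, int r)
{
    unsigned char *tmp = malloc((size_t)w * h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int sum = 0, n = 0;
            for (int dy = -r; dy <= r; dy++)
                for (int dx = -r; dx <= r; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                        sum += img[sy * w + sx];
                        n++;
                    }
                }
            tmp[y * w + x] = (unsigned char)(sum / n);
        }
    for (int i = 0; i < w * h; i++)
        img[i] = tmp[i];
    free(tmp);
}

/* The "cheat": blur a quarter-size copy, then scale it back up.
 * A radius-2 blur on the small image behaves roughly like a radius-8
 * blur on the original, for a fraction of the work. */
static void cheap_blur(unsigned char *img, int w, int h)
{
    int sw = w / 4, sh = h / 4;        /* sizes assumed divisible by 4 */
    unsigned char *small = malloc((size_t)sw * sh);
    resample(img, w, h, small, sw, sh);   /* massively shrink        */
    box_blur(small, sw, sh, 2);           /* blur the tiny image     */
    resample(small, sw, sh, img, w, h);   /* scale back to full size */
    free(small);
}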

PNG images look blurry when scaled

I'm a visual/UI designer working on a project/product which has been designed by another designer. This designer provided the front-end dev with good-quality PNG icons, but when the front-end dev sets the images' scale to 0.7, they look blurry.
I've noticed that if we set the image's scale to 0.5, they don't look blurry at all:
0.7: http://i.stack.imgur.com/jQNYG.png
0.5: http://i.stack.imgur.com/hBShu.png
Does anyone know why that happens?
I personally always work with 0.5 scales because I was taught so. Is there any logical reason for this?
Sorry if the answer is obvious. I am really curious about that. Thanks in advance.
What is happening largely depends upon the software that you are using to shrink the image. There is a major difference between reducing by 0.5 and 0.7.
If you shrink by 0.5, you are combining 4 pixels into one.
If you shrink by 0.7 you are doing fractional sampling. 10 pixels in each direction get reduced to 7.
In 0.5 sampling, you read two pixels across and two pixels down.
In 0.7 sampling you read 1.42857142857143 pixels in each direction. In order to do that you have to weight pixel values. That is going to create blurriness in a drawing.
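A minimal C sketch of that difference for a single row of grayscale pixels (a plain box/linear filter is assumed here; real resamplers differ, and bounds checks are omitted):

/* Halving: each output pixel is the average of exactly two whole
 * source pixels (four, in two dimensions) -- no fractional weights. */
unsigned char half_sample(const unsigned char *row, int x_out)
{
    return (unsigned char)((row[2 * x_out] + row[2 * x_out + 1]) / 2);
}

/* Scaling by 0.7: each output pixel maps back to x / 0.7, roughly 1.43 * x,
 * in the source, which usually lands between two pixels, so their values
 * must be weighted (linear interpolation here) -- this is what smears
 * hard edges into intermediate values. */
unsigned char frac_sample(const unsigned char *row, int x_out)
{
    double sx = x_out / 0.7;     /* source position, e.g. 1.43, 2.86, ... */
    int    x0 = (int)sx;
    double w  = sx - x0;         /* fractional weight */
    return (unsigned char)(row[x0] * (1.0 - w) + row[x0 + 1] * w + 0.5);
}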
It's because when you halve an image's size (in both dimensions), you are effectively combining exactly 4 pixels into one. However, when you use a slightly off scale (such as 0.7), you have one and a fraction of a pixel going into one pixel (in each dimension). This means the data from one pixel is being used in up to 4 pixels, instead of one, causing a blurry effect.
Sorry, making an example image would be quite difficult for me, but I hope you get the concept.
I think this has to do with interpolation: when you resize an image there is no way of knowing what is supposed to be in between the two pixels that are essentially being merged. What the computer tries to do is guess what the new pixel is supposed to look like by looking at the pixels around it and combining the values.
So for example in the image above it will go: what is in between white and orange? A less bright orange. OK, let's make the merged pixel look like that. When you get to a corner there might be more orange, so the new pixel will look more orangey, you get the point.
Now when you scale by 0.5 the computer looks at the pixels and merges all the pixels together at a constant rate. What I mean by that is if you look at an image and try to divide it in half, you will always merge 4 pixels together. However, if you scale by 0.7 you're merging an irregular number of pixels, resulting in different concentrations of pixels as the image is scaled, which results in a blurry image.
If you don't understand this, I understand, I kinda went off on a tangent... if you need more clarification, comment below :)
Add an .img-crisp class to the image:
.img-crisp {
image-rendering: -moz-crisp-edges; /* Firefox */
image-rendering: -o-crisp-edges; /* Opera */
image-rendering: -webkit-optimize-contrast; /* Webkit (non-standard naming) */
image-rendering: crisp-edges;
-ms-interpolation-mode: nearest-neighbor; /* IE (non-standard property) */
}
The image-rendering CSS property sets an image scaling algorithm. The
property applies to an element itself, to any images set in its other
properties, and to its descendants.
Source.

Algorithm to detect the change in visible luminosity in an image

I want a formula to detect/calculate the change in visible luminosity in a part of the image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g.: in the picture above, we notice that the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general-purpose solution, but may hold some pointers and cause people who know better to suggest improvements for you!!! Note that the method requires the image to have some areas of high uniformity, like the whiteish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
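A rough C sketch of the per-row histogram check described above (the coarse binning and thresholds are deliberately crude assumptions; a real detector would smooth the histogram and measure peak separation):

/* Build a histogram of one row of 8-bit lightness values and count how
 * many well-separated "populated" runs of bins it has; a uniform strip
 * under two illumination levels tends to show two distinct clusters.
 * Bin size and threshold are illustrative only. */
int row_looks_bimodal(const unsigned char *lightness, int width)
{
    int hist[256] = {0};
    for (int x = 0; x < width; x++)
        hist[lightness[x]]++;

    /* Collapse into 32 coarse bins to ignore small local variation. */
    int coarse[32] = {0};
    for (int i = 0; i < 256; i++)
        coarse[i / 8] += hist[i];

    /* Count runs of populated bins; two separated runs = bimodal-ish. */
    int runs = 0, in_run = 0, threshold = width / 64;
    for (int b = 0; b < 32; b++) {
        if (coarse[b] > threshold) {
            if (!in_run) runs++;
            in_run = 1;
        } else {
            in_run = 0;
        }
    }
    return runs == 2;
}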
The first problem is in "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on the linear image, which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate you need the results to be. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
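One common choice for that formula is Rec. 709 relative luminance computed on linearised sRGB values; a small sketch in C, assuming 8-bit sRGB input:

#include <math.h>

/* Undo the sRGB transfer curve so we work on linear light. */
static double srgb_to_linear(double c)      /* c in [0, 1] */
{
    return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

/* Rec. 709 relative luminance of an 8-bit sRGB pixel, result in [0, 1]. */
double relative_luminance(unsigned char r8, unsigned char g8, unsigned char b8)
{
    double r = srgb_to_linear(r8 / 255.0);
    double g = srgb_to_linear(g8 / 255.0);
    double b = srgb_to_linear(b8 / 255.0);
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}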
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?

I need help drawing sunrays, glimmers, bursts, sparkles, etc in C

I am in the process of learning how to create a lens flare application. I've got most of the basic components figured out and now I'm moving on to the more complicated ones such as the glimmers / glints / spikeball as seen here: http://wiki.nuaj.net/images/e/e1/OpticalFlaresLensObjects.png
Or these: http://ak3.picdn.net/shutterstock/videos/1996229/preview/stock-footage-blue-flare-rotate.jpg
Some have suggested creating particles that emanate outwards from the center while fading out and either increasing or decreasing in size but I've tried this and there are just too many nested loops which makes performance awful.
Someone else suggested drawing a circular gradient from center white to radius black and using some algorithms to lighten and darken areas thus producing rays.
Does anyone have any ideas? I'm really stuck on this one.
I am using a limited compiler that is similar to C but I don't have any access to antialiasing, predefined shapes, etc. Everything has to be hand-coded.
Any help would be greatly appreciated!
I would create large circle selections, then use a radial gradient. Each side of the gradient is white, but one side has 100% alpha and the other 0%. Once you have used the gradient tool to draw that gradient inside the circle, deselect it and use the transform tool to skew or, in a sense, smash it. Then duplicate it several times and rotate each one, creating a spiral or circle, holding Ctrl to constrain when needed. Once those several layers are in the rotation or design that you want, group them in a folder, and then you can further affect them all at once with another transform or skew. When you use these really small, they are like little stars. But you can do many different things when creating each one to make them different, like making each one lower in opacity than the last, etc.
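Since the question asks for plain C with no antialiasing or predefined shapes, here is a per-pixel sketch of the same idea: a radial falloff modulated by an angular function so brightness peaks along rays. The ray count, falloff powers and buffer layout are all illustrative assumptions, not a definitive recipe:

#include <math.h>

/* Draw a crude "spikeball" into an 8-bit grayscale buffer: a radial
 * falloff from the centre, modulated by an angular function so that
 * brightness peaks along a fixed number of rays. */
void draw_spikeball(unsigned char *img, int w, int h,
                    double cx, double cy, double radius, int n_rays)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double dx = x - cx, dy = y - cy;
            double r  = sqrt(dx * dx + dy * dy) / radius;  /* 0 at centre */
            if (r > 1.0)
                continue;
            double angle = atan2(dy, dx);
            /* Sharp angular spikes: raise the cosine to a high power. */
            double spike = pow(0.5 + 0.5 * cos(angle * n_rays), 8.0);
            /* Radial falloff: bright core plus longer, fainter rays.  */
            double core  = pow(1.0 - r, 4.0);
            double rays  = (1.0 - r) * spike;
            double v     = 255.0 * (core + 0.8 * rays);
            int    sum   = img[y * w + x] + (int)v;        /* additive blend */
            img[y * w + x] = (unsigned char)(sum > 255 ? 255 : sum);
        }
    }
}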
I found a few examples of how to do lens-flare 'via code'. Ideally you'd want to do this as a post-process - meaning after you're done with your regular render, you process the image further.
Fragment shaders are apt for this step. The easiest version I found is this one. The basic idea is to
1) Identify really bright spots in your image and potentially downsample it.
2) Shoot rays from the fragment to the center of the image and sample some pixels along the way.
3) Accumulate the samples and apply further processing - chromatic distortion etc - on it.
And you get a whole range of options to play with.
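A CPU-side C sketch of the ray-sampling step in that list (the bright-pass input, ghost count and spacing are assumptions; in practice this runs in a fragment shader):

/* Sample along the ray from pixel (x, y) toward the image centre of a
 * thresholded "bright-pass" image, and average the samples. */
unsigned char flare_sample(const unsigned char *bright, int w, int h,
                           int x, int y, int n_ghosts)
{
    double cx = w * 0.5, cy = h * 0.5;
    double dx = (cx - x) / n_ghosts;      /* step toward the centre */
    double dy = (cy - y) / n_ghosts;
    int sum = 0;
    for (int i = 1; i <= n_ghosts; i++) { /* accumulate samples along the ray */
        int sx = (int)(x + dx * i);
        int sy = (int)(y + dy * i);
        sum += bright[sy * w + sx];
    }
    return (unsigned char)(sum / n_ghosts);
}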
Another more common alternative seems to be
Have a set of basic images (circles, hexes) and render them as a bunch of bright objects, along the path from the camera to the light(s).
Composite this image on top of the regular render of your scene.
The problem is in determining when to turn on lens flare, since it is dependent on whether a light is visible/occluded from the camera. GPU Gems comes to the rescue, with better options.
A more serious, physically based implementation is listed in this paper. This is a real-time version of making lens flares, but you need hardware that supports both vertex and geometry shaders.

Is StretchBlt HALFTONE == BILINEAR for all scaling?

Can anyone clarify if the GDI StretchBlt function for the workstation Win32 API performs bilinear interpolation for scaling to both larger and smaller images for 24/32-bit color images? And if not, is there a GDI (not GDI+) function that does this?
The SetStretchBltMode fn has a setting HALFTONE which is documented as follows:
HALFTONE
Maps pixels from the source rectangle into blocks of pixels in the destination rectangle. The average color over the destination block of pixels approximates the color of the source pixels.
I've seen references (see follow-up to first answer) that this performs bilinear interpolation when scaling down an image, but no clear answer of what happens when scaling up.
I have noticed that the Windows Mobile CE SDK does support a BILINEAR flag - which is documented exactly opposite of the HALFTONE comments (only works for scaling up).
Note that for the scope of this question, I'm not interested in pursuing GDI+ (which has numerous interpolation options), OpenGL, DirectX, etc. as alternatives, so please don't bother with follow-ups regarding these other APIs or alternate image libraries.
What I'm really hoping to find is some definitive MS/MSDN or other high-quality documentation that clearly documents this behavior of the Win32 (desktop) GDI behavior.
Meanwhile, I'll try some experiments comparing GDI vs. Direct2D (which does have an explicit flag to control this) and post my findings.
Thanks!
I've been looking into this same problem for the past couple of weeks.
As far as I can tell, there does not exist any definitive documentation on this behaviour from Microsoft.
However, I've run some tests myself, to try and establish the degree to which StretchBlt can be trusted to perform consistently with respect to up- and down-scaling images in halftone mode.
My findings are:
1) StretchBlt does produce adequate quality up- and down-scaled images. It might be a touch below Photoshop quality, but probably OK for most practical purposes.
2) It seems to depend upon hardware acceleration, whenever it's available. I haven't been able to confirm this, but I have a slight fear that this may lead to different outputs on different types of hardware. However, on the 5 or 6 different systems I've tried it on, old and new, the performance has been consistent and fast.
3) If you use the call on a 16-bit color device, or lower, StretchBlt will automatically dither your image. If you run it on a 24-bit color device, it will not dither.
4) If you use it to scale small images (smaller than 150x150px), it will randomly fall back to nearest neighbour interpolation. This can be remedied in your own software, by padding the bitmap before scaling, doing StretchBlt on it, and then removing the padding afterwards. Kind of a hack, but it works.
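For reference, the basic call pattern these findings assume looks roughly like this (the DCs, sizes and error handling are placeholders; the documentation notes that after switching to HALFTONE you should call SetBrushOrgEx to realign the brush origin):

#include <windows.h>

/* Stretch the source DC's bitmap (srcW x srcH) into the destination DC
 * at dstW x dstH using HALFTONE averaging. */
void stretch_halftone(HDC hdcDst, HDC hdcSrc,
                      int dstW, int dstH, int srcW, int srcH)
{
    int oldMode = SetStretchBltMode(hdcDst, HALFTONE);
    POINT oldOrg;
    /* Reset the brush origin after switching to HALFTONE, or brush
     * misalignment can occur. */
    SetBrushOrgEx(hdcDst, 0, 0, &oldOrg);

    StretchBlt(hdcDst, 0, 0, dstW, dstH,
               hdcSrc, 0, 0, srcW, srcH, SRCCOPY);

    SetStretchBltMode(hdcDst, oldMode);
}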
HALFTONE mode performs a very blocky halftone dithering on the image, based on varying the conversion thresholds over a defined square. I have never seen a situation where it would be considered the best choice.
COLORONCOLOR is the best mode for color images, but as you've seen it doesn't give great results.
GDI does not support a bilinear mode (except in Windows Mobile CE as you discovered). The naive implementation of bilinear does not do very well when shrinking an image, as it simply tries to interpolate between two adjacent input pixels without trying to draw from a larger area.

Resources