Compositing layer styles - algorithm

I am trying to implement "Inner shadow" style from Adobe Photoshop.
I have 3 RGBA layers: source layer (brown), inner shadow layer (white) and background layer. They can have Photoshop-like blend modes (Normal, Multiply, Color Burn ...) - blending is not associative!
I would like to blend them together like a layer style in Photoshop. When I multiply Shadow alpha by source alpha and blend (shadow Over (source Over background)), I am getting dark contours around the object, where source alpha is between 0 and 1.
Photoshop reference is on the left, my result is on the right.
The same problem arises with "Color Overlay" and many other styles. Do you know how to do this correctly and avoid the contours?

I found an answer to this problem in the PDF 1.7 specification, on page 339.
So, compositing (shadow with (source with background)) is wrong. The right way to do it is:
Composite the source with the background into temporary channels C, disregarding the source's alpha and using an alpha value of 1.0 everywhere.
Composite the (uncropped) shadow with C into C in the standard way.
Compute a weighted average of C with the background into the background, using the source alpha as the weighting factor.
As you can see, the shadow is blended with both the source and the background. The weighted average was the function I was looking for.
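Here is a minimal numpy sketch of those three steps, assuming straight (non-premultiplied) float channels in [0, 1] and an opaque background; blend stands in for whatever blend mode the source layer uses (Normal here):

import numpy as np

def layer_style_composite(src_rgb, src_a, shadow_rgb, shadow_a, bg_rgb,
                          blend=lambda s, b: s):
    # All *_rgb arrays are (H, W, 3); the *_a alphas are (H, W, 1), in [0, 1].
    # 1. Composite the source with the background, ignoring the source's
    #    alpha (i.e. alpha = 1.0 everywhere), into temporary channels C.
    C = blend(src_rgb, bg_rgb)
    # 2. Composite the (uncropped) shadow with C in the standard way.
    C = shadow_a * shadow_rgb + (1.0 - shadow_a) * C
    # 3. Weighted average of C with the background, weighted by source alpha.
    return src_a * C + (1.0 - src_a) * bg_rgb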

Related

HTML5 canvas - make a mono mask efficiently without antialiasing

I am using pixel colour inspection to detect collisions. I know there are other ways to achieve this but this is my use case.
I draw a shape cloned from the main canvas onto a second canvas, switching the fill and stroke colours to pure black. I then use getImageData() to get an array of pixel colours and inspect them: if I see black, I have a collision with something.
However, some pixels are shades of grey because the second canvas is applying antialiasing to the shape. I want only black or transparent pixels.
How can I get the second canvas to be composed of either transparent or black only?
I achieved this long ago with Windows GDI via compositing/XOR combinations, etc. However, GDI did not always apply antialiasing. I guess the answer lies in globalCompositeOperation or filter, but I cannot see what settings/filters or sequence to apply.
I appreciate I have not provided sample code, but I am hoping that someone can throw me a bone and I'll work up a snippet here, which might become a standard cut & paste for posterity.
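One workaround, sketched below, is to threshold the pixels after drawing rather than trying to disable antialiasing (Python for brevity; the same loop ports directly to the flat RGBA array that getImageData() returns, and the threshold value is an assumption to tune):

def to_mono_mask(rgba, threshold=128):
    # rgba: flat array of R, G, B, A bytes, four per pixel.
    # Snap mostly-opaque pixels to pure black, everything else to transparent.
    for i in range(0, len(rgba), 4):
        if rgba[i + 3] >= threshold:
            rgba[i] = rgba[i + 1] = rgba[i + 2] = 0
            rgba[i + 3] = 255
        else:
            rgba[i] = rgba[i + 1] = rgba[i + 2] = rgba[i + 3] = 0
    return rgba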

Blending two sprites in OpenGL ES without affecting background

Basically what I want to achieve is a sprite highlight animation effect as displayed below.
The idea is that the white-translucent gradient sprite moves over the other sprite (left to right), using a blend mode like Overlay (Photoshop). The difficult part is that the gradient sprite should only be drawn on the visible pixels of the sprite underneath; the rest of the gradient overlay should be discarded so that it does not affect the background or other sprites underneath (as in the image on the far right).
Is it possible to achieve that effect with a clever combination of OpenGL blend modes and how, or would I have to create a custom shader to combine these sprites?
Background: I'm using libgdx with OpenGL ES 2.0 and the app runs on Desktop, Android and iOS.
There are many ways to do it. The simplest one: render the button and the highlight in a single pass. In the fragment shader, after sampling the button texture and the highlight texture, compute the output colour as for blending (it could be mix(c1,c2,c2.a)) and take the alpha from the button texture only. Of course, enable blending in the usual way: (srcalpha, 1-srcalpha)
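For illustration, here is the per-pixel math that shader performs, as a numpy sketch (the names are placeholders; mix(c1,c2,c2.a) is GLSL's linear interpolation):

import numpy as np

def highlight_over_button(button_rgb, button_a, hi_rgb, hi_a):
    # Blend the highlight over the button colour: mix(c1, c2, c2.a).
    out_rgb = (1.0 - hi_a) * button_rgb + hi_a * hi_rgb
    # Output the button's own alpha, so the highlight is clipped to the
    # button's visible pixels and never touches the background.
    return out_rgb, button_a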

Image Effect with Dark Borders

I am creating an effects library for a PhotoBooth app. I have created effects like Black/White, Vintage, Sepia, Retro, etc.
I now want to create a few effects with a dark border at the edges that forms a kind of frame for the image, something like this: Example Effect
How can I do this using Pixel Bender and Flash ?
The effect you are describing is called vignetting. It is basically just darkening the pixels with a weight that varies with distance from the center of the image. In image editing it corresponds to overlaying the image with black and applying a circular or elliptical mask to it.
You can do this by several methods, depending on how you operate on the image and its pixels, for example by multiplying each pixel by a weight coefficient that is close to 1 near the center and smaller farther from it. The distance can be calculated from the pixel's coordinates relative to the center.
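A minimal numpy sketch of that weighting (not Pixel Bender, but the same per-pixel math; the quadratic falloff and strength value are arbitrary choices):

import numpy as np

def vignette(img, strength=0.6):
    # img: (H, W, 3) float array in [0, 1].
    h, w = img.shape[:2]
    y, x = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
    dist /= dist.max()                     # normalise distance to [0, 1]
    weight = 1.0 - strength * dist ** 2    # ~1 at the centre, smaller at edges
    return img * weight[..., None]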

Sprite quads & depth testing correctly in OpenGL ES 2

I am trying to render 2D (flat) sprites in a 3D environment using OpenGL ES 2. The way I create each sprite is pretty standard: I create a quad consisting of two triangles, and I map the texture onto that. Everything works fine, except I noticed something strange: when depth testing is turned on (which it should be in 3D mode), the corners of my sprites are painted using the background color.
The easiest way to show this is by illustration:
When I turn off depth testing (on the left) it looks fine, but when I turn it on (on the right) you can see the green sprite's rectangle overlapping on top of the yellow sprite. They both use the same code, the same PNG file, the same shader. Everything is the same except depth testing.
I'm hoping someone might know a way to work around this.
What you can do is alpha testing. Basically, your texture has to have an alpha value of 0 where it should be transparent (which it may already have). Then you configure the alpha test, e.g.:
glAlphaFunc(GL_GREATER, 0.5f); // pass only fragments with alpha > 0.5
glEnable(GL_ALPHA_TEST);       // fixed-function; desktop GL only
This way, every pixel (or rather, fragment) with an alpha value <= 0.5 will not be written into the framebuffer (and therefore not into the depth buffer). You can also do the alpha test yourself in the fragment shader by just discarding the fragment:
...
if (color.a < 0.5)
    discard;
...
Then you don't need the fixed-function alpha test (I think that is why it was deprecated in modern desktop GL; I don't know about ES).
EDIT: After looking into the ES 2.0 spec, it seems there is no fixed-function alpha test any more, so you will have to do it in the fragment shader as written above. This way you can also make it dependent on a specific color or any other computable property instead of the alpha channel.

Edge Detection and transparency

Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the common algorithms for this and the open source library OpenCV. Aside from rolling my own or using OpenCV, is there an easy way to do this? I am open to any language or platform.
Thanks
If your background is consistent within an image but inconsistent across images it could get tricky, but here is what I would do:
Separate the image into some intensity/colour form such as YUV or Lab.
Make a histogram over the colour part and find the most occurring colour; this is (most likely) your background. (Update: a better trick might be to find the most occurring colour among all pixels within one or two pixels of the edge of the image.)
Starting from the edges of the image, set all pixels that have that colour, and are connected to the edge through pixels of that colour, to transparent.
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels that get their colour from both the background and the piece of clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing through some edge detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge pixel (i.e. the colour of the clothing in that region) and transparent (if your output image format supports that).
If you want to get really fancy, you can increase the transparency depending on how similar the pixel's colour is to the background colour.
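A rough OpenCV sketch of those steps, using a tolerance-based flood fill from the image corners in place of the histogram (the filename, tolerance, and blur size are placeholders to tune):

import cv2
import numpy as np

img = cv2.imread("clothing.jpg")
h, w = img.shape[:2]

# Flood-fill the background from each corner with a small colour tolerance;
# FLOODFILL_MASK_ONLY marks matching pixels in the mask without touching img.
mask = np.zeros((h + 2, w + 2), np.uint8)
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    cv2.floodFill(img, mask, seed, 0, loDiff=(10, 10, 10),
                  upDiff=(10, 10, 10), flags=8 | cv2.FLOODFILL_MASK_ONLY)

# Background pixels become transparent; blur the alpha a little so edge
# pixels blend from clothing colour toward transparent.
alpha = np.where(mask[1:-1, 1:-1] > 0, 0, 255).astype(np.uint8)
alpha = cv2.GaussianBlur(alpha, (5, 5), 0)

out = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
out[..., 3] = alpha
cv2.imwrite("clothing.png", out)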
Basically, find the color of the background and subtract it, but I guess you knew this. It's a little tricky to do this all automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
find the background: There are several options. Probably the easiest is to histogram the image; the large group of pixels with similar values is the background, and if there are two large clusters, the background will be the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background color, but this seems inferior because, for example, reflection from a flash could dramatically brighten more centrally located background pixels.
remove the background: A first take at this would be to threshold the image based on the background color, run the morphological "open" or "close" operations on the result, and use that as a mask to select your clothing article. (The point of open/close is to avoid removing small background-colored items on the clothing, like black buttons on a white blouse, or bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part of this will probably be at the shadow around the object (e.g. a black jacket on a white background will have a continuous gray shadow at some of the edges and where to make this cut?), but if you get this far, post another question.
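A short OpenCV sketch of the threshold-then-open/close idea (the background colour, tolerance, and kernel size are assumptions you would tune):

import cv2
import numpy as np

img = cv2.imread("clothing.jpg")
bg = np.array([240, 240, 240])               # sampled background colour (BGR)
tol = 20
mask = cv2.inRange(img, bg - tol, bg + tol)  # 255 where background-coloured
mask = cv2.bitwise_not(mask)                 # 255 where (probably) clothing

kernel = np.ones((5, 5), np.uint8)
# "Close" keeps small background-coloured details (buttons, reflections)
# attached to the garment; "open" removes speckles of false foreground.
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)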
If you know the exact color intensity of the background, and it will never change, and the articles of clothing will never coincide with this color, then this is a simple application of background subtraction: everything that is not that particular color intensity is considered an "on" pixel, one of interest. You can then use connected component labeling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to figure out separate groupings of objects.
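A minimal sketch of that labeling step with OpenCV (assuming you already have a binary mask from the subtraction):

import cv2
import numpy as np

def largest_component(binary_mask):
    # binary_mask: uint8 image where 255 marks "on" pixels of interest.
    # Label connected groups and keep only the largest non-background one.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_mask)
    if n < 2:
        return binary_mask                   # nothing but background
    largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()  # label 0 = background
    return np.where(labels == largest, 255, 0).astype(np.uint8)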
For a color image with the same background in every picture:
Convert your image to HSV or HSL.
Determine the hue value of the background (+/-10): do this step once, using Photoshop for example, then use the same value for all your pictures.
Perform a color threshold: on the hue channel, exclude the hue of the background ([0, hue[ + ]hue, 255] typically); for all other channels, include the whole value range (0 to 255 typically). This selects the pixels which are NOT the background.
Perform a "fill holes" operation (normally found alongside blob analysis or labelling functions) to recover the parts of the clothes that may be the same color as the background.
Now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, 0 pixels represent the background.
This last step depends on how you want to make pixels transparent: typically, if you save your image as a PNG with an alpha (transparency) channel, apply a logical AND (also called "masking") between the alpha channel of the original image and the mask built in the previous step.
Voilà, the background is gone; save the resulting image.
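An OpenCV sketch of this recipe (the filename, background hue, and tolerance are assumptions; the hue is the value you would measure once, as described above, noting that OpenCV's 8-bit hue channel runs 0-179):

import cv2
import numpy as np

img = cv2.imread("clothing.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

bg_hue, tol = 60, 10                        # measured background hue +/- 10
bg = cv2.inRange(hsv, (bg_hue - tol, 0, 0), (bg_hue + tol, 255, 255))
mask = cv2.bitwise_not(bg)                  # non-zero = clothes

# "Fill holes": flood-fill the background from a corner (assumed to be
# background), then OR the unreached enclosed regions back into the mask.
flood = mask.copy()
h, w = mask.shape
ff_mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
mask |= cv2.bitwise_not(flood)

# Logical AND between the original alpha channel and the mask, then save.
out = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
out[..., 3] = cv2.bitwise_and(out[..., 3], mask)
cv2.imwrite("clothing.png", out)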
