Anti Aliasing - Alpha Image [Unity3D]

I have a system that removes the colour white (give or take a few shades) from an image and replaces it with an alpha channel. (The image is taken from the user's phone camera, and the system tries to remove the selected colouring.)
This leaves harsh edges most of the time, and I want to know if it is possible to add some type of anti-aliasing on top.
The system works by taking in the image and searching through each pixel's data. If a pixel is white (or close to it), it replaces it with a fully transparent colour.
So I guess my question is: how do I make the edges less harsh? Thanks.

Anti-aliasing is not what you are looking for. It takes care of artefacts caused by the limited resolution of your image. Your problem, however, is not related to resolution; you would still have it at infinite resolution.
What you need to do is this: when you find a white pixel, increase the transparency of the pixel itself and of the pixels around it.
You can include just the four pixels immediately above, below, left and right of your white pixel, or you can choose any other shape, e.g. all pixels which lie inside a circle of a given radius around the white pixel.
Also you can choose a function which determines how transparency is distributed over that shape. You can make everything half-transparent or you can decrease the effect towards the edges of that shape (though I don't think that this will be necessary).
Thus each pixel will receive transparency from several pixels around it. The resulting transparency must be computed from all these contributions. Simply multiplying them probably won't do, because you would have a hard time ever reaching alpha = 0. You may, however, interpret (255 - alpha) as a measure of transparency, add all contributing transparencies, and then convert back into alpha: something like alpha = max(0, 255 - ((255-a1) + (255-a2) + ...)).
It will be difficult to do this in-place, i.e. with just one copy of the image. You might need an intermediate "image", where each pixel is associated with all the transparency contributions from the pixels around it.
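A minimal sketch of that intermediate-buffer idea in Python with NumPy, assuming the image is already an RGBA array (e.g. loaded via Pillow); the function name, the 4-pixel neighbourhood and the spread amount are illustrative choices, not part of the original system:

import numpy as np

def soften_white_edges(rgba, white_threshold=240, spread=128):
    # For every near-white pixel, make it fully transparent and add a
    # transparency contribution to its four immediate neighbours,
    # accumulating everything in an intermediate buffer as described above.
    h, w, _ = rgba.shape
    is_white = np.all(rgba[..., :3] >= white_threshold, axis=-1)

    # Work with (255 - alpha) as the measure of transparency.
    transparency = (255 - rgba[..., 3]).astype(np.int32)

    contrib = np.zeros((h, w), dtype=np.int32)
    contrib[:-1, :] += spread * is_white[1:, :]   # a white pixel below contributes upward
    contrib[1:, :]  += spread * is_white[:-1, :]  # a white pixel above contributes downward
    contrib[:, :-1] += spread * is_white[:, 1:]   # a white right neighbour contributes
    contrib[:, 1:]  += spread * is_white[:, :-1]  # a white left neighbour contributes
    contrib[is_white] = 255                       # white pixels themselves go fully transparent

    # alpha = max(0, 255 - sum of transparency contributions)
    rgba[..., 3] = np.clip(255 - (transparency + contrib), 0, 255).astype(np.uint8)
    return rgba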

Related

Crop to exclude any transparency on a ragged edge

I have a square image with a ragged edge: the transparent pixels outside the image "weave" in and out towards the image center, within some unknown range. This range may be different for each side.
Is there an algorithm that would crop the image to the largest size possible with no transparent pixels remaining? I can think of an iterative one: start with a small cropping square in the center. If no transparent pixels are detected, start again but enlarge the cropping square by 1 pixel. Then repeat. Once you detect transparent pixels after cropping, go back one step and save the result.
There is an obvious algorithm that comes to mind:
Find y* = min { y : P(x, y) is transparent for some x }, where P(x, y) is the pixel at coordinate (x, y), then crop the image to the rows [0, y*) (assuming y starts at zero at the bottom of the image, and that transparent pixels always occur at the top of the image).
Note that this algorithm has a serious downside: if y* happens to be very close to 0 because of an errant transparent pixel, you will end up cropping away almost your entire image.
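A sketch of that simple algorithm in Python with NumPy, assuming an RGBA array in NumPy's usual convention (row 0 at the top), so "transparent pixels at the top" means low row indices; the function name is illustrative:

import numpy as np

def crop_out_top_transparency(rgba):
    # Rows that contain at least one fully transparent pixel.
    transparent_rows = np.any(rgba[..., 3] == 0, axis=1)
    ys = np.nonzero(transparent_rows)[0]
    if ys.size == 0:
        return rgba                 # nothing to crop
    y_star = ys.max()               # last offending row, counted from the top
    return rgba[y_star + 1:]        # keep everything below it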
If you want a more robust solution, I believe you will have to frame this problem as an optimization problem and solve it, allowing for some errant transparent pixels to be masked instead of cropped. The algorithm that would do well for you would be an energy-based formulation which could be solved using graph-cuts. For example, see the GrabCut algorithm.
If you know your requirements and how bad your data can get, you can make a judgment call on how involved a solution you need; at this point I would highly recommend clarifying both the requirements and just how bad the data can get.

Any ideas on how to remove small abandoned pixel in a png using OpenCV or other algorithm?

I got a png image like this:
The blue colour represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the small ones that are not part of it. In this example, the biggest one is the red circle, and I will keep it. But the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of objects, use the following algorithm: label the connected components of the mask image of the objects (all object pixels white, transparent ones black). Then compute the areas of the connected components and filter them. At this step, you have a label map and a list of authorised labels. You can then read the label map and overwrite the mask image, setting every pixel to white if it has an authorised label.
OpenCV did not always ship a labelling function (modern versions provide cv::connectedComponents), but cvFloodFill can do the same thing with several calls: for each unlabelled white pixel, call FloodFill with this pixel as the seed. Then store the result of this step in an array (of the size of the image) by assigning each newly filled pixel its label. Repeat this as long as you have unlabelled pixels.
Otherwise you can implement connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (see for example Matlab's bwlabel).
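For illustration, here is a sketch of the labelling-and-filtering approach in Python, using cv2.connectedComponentsWithStats from modern OpenCV; `mask` is assumed to be an 8-bit image with object pixels at 255 and transparent ones at 0, and the filter simply keeps the largest component, as the question asks:

import cv2
import numpy as np

def keep_largest_component(mask):
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return mask                               # background only, nothing to keep
    areas = stats[1:, cv2.CC_STAT_AREA]           # label 0 is the background
    biggest = 1 + int(np.argmax(areas))
    return np.where(labels == biggest, 255, 0).astype(np.uint8)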
The handiest way to filter objects, if you have a priori knowledge of their size, is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you perform an "opening", that is, an erosion followed by a dilation.
The small objects (smaller than the structuring element you chose) will disappear with the erosion, while the bigger ones will remain and be restored by the dilation.
(Reference: cv::morphologyEx in the OpenCV documentation.)
The shape of the big object might be altered. If you're only doing detection this is harmless, but if you want your object to avoid being transformed you'll need to apply a "top hat" transform instead.
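A sketch of the opening in Python with OpenCV; the 7x7 elliptical kernel is an assumed size and should be chosen slightly larger than the specks you want to remove:

import cv2

mask = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
# Opening = erosion followed by dilation: objects smaller than the
# kernel disappear, larger ones survive and are largely restored.
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("output.png", opened)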

When using Photoshop/GIMP is it better to color->alpha and then resize, or vice versa?

I was making a circular icon with semi-transparency, so I started with a large filled-in circle with a black border, then I did white->alpha, and resized the image to my required size. Would it have made a difference if I resized first, and then did white->alpha?
Thanks.
Yes.
In general, whenever you re-sample, the order will have an impact if you are using any anti-aliasing, or if the resampling algorithm is anything other than nearest-neighbour.
Try the following exercise for a visual example:
In both cases, create your circular icon.
Case 1:
Change white-center of the circle to alpha (0%, fully transparent).
Re-sample (ie: down-sample to 25%) the entire image using something other than nearest neighbour (ie: actually use antialiasing of some sort)
Paste a copy of the result over a red background.
You should only see black and red colors inside the circle when you zoom in, with a smooth transition from black-to-red.
Case 2:
Re-sample (ie: down-sample to 25%) the entire image using something other than nearest neighbour (ie: actually use antialiasing of some sort)
Change white-center of the circle to alpha (0%, fully transparent).
Paste a copy of the result over a red background.
You should see a black outer circle, with a bit of a white halo inside of it, then the red center, with a smooth black-to-white transition, and a sharp white-to-red transition. This will depend on the aggressiveness factor you set with the magic-wand tool you are likely using to auto-select the region you want to modify the alpha properties of.
Now repeat case 2, but disable any sort of anti-aliasing, and enforce the use of a nearest neighbour algorithm rather than bi-cubic spline, Hermite, Gaussian, etc. Your results will look very similar to case 1, except you won't see the smooth transition from black-to-red when you zoom in, you will just see a sharp black-to-red transition.
In general, you will get the best subjective quality when working on your images first, then re-sampling later. If you paste it as its own layer, then you still have all the image data available and none is lost; the image is just rendered smaller.
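The two cases can also be reproduced outside an image editor. Here is a sketch with Python and Pillow, where a hard colour threshold stands in for the magic-wand selection (the file names and the threshold are assumptions):

from PIL import Image

def white_to_alpha(img, threshold=250):
    # Crude stand-in for the magic wand: fully clear near-white pixels.
    img = img.convert("RGBA")
    out = Image.new("RGBA", img.size)
    out.putdata([(r, g, b, 0) if min(r, g, b) >= threshold else (r, g, b, a)
                 for (r, g, b, a) in img.getdata()])
    return out

icon = Image.open("circle.png").convert("RGBA")
red = Image.new("RGBA", (icon.width // 4, icon.height // 4), (255, 0, 0, 255))

# Case 1: white -> alpha first, then down-sample: smooth black-to-red edge.
case1 = white_to_alpha(icon).resize(red.size, Image.LANCZOS)
Image.alpha_composite(red, case1).save("case1.png")

# Case 2: down-sample first, then white -> alpha: the blended off-white
# pixels escape the threshold and remain as a halo.
case2 = white_to_alpha(icon.resize(red.size, Image.LANCZOS))
Image.alpha_composite(red, case2).save("case2.png")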

Algorithm for "neon glow" graphics programming

I am searching for an article or tutorial that explains how one can draw primitive shapes (mainly simple lines) with a (neon) glow effect in the graphical output of a computer program. I do not want to do sophisticated stuff like, for example, in modern first-person shooters. I am searching for a simple solution, like the lines in this picture: http://tjl.co/blog/wp-content/uploads/2009/05/NeonStripes.jpg - but of course drawn by a computer program in my case.
The whole thing should run on a modern smart phone, so the hardware is a bit limited.
I do know a bit about OpenGL, but not too much, so unfortunately I am a bit lost here. Did some research on Google ("glow effect algoritm" and similar), but found either highly complex stuff for 3D games, or tutorials for Photoshop & co.
So what I would really need is an in-depth article on that subject, but not on a very advanced level. I hope that's even possible... I have just started with OpenGL and did some minor graphics programming in the past, but I am a long-time programmer now, so I would understand technical papers in general.
Does anyone of you know of such an article/paper/tutorial/anything?
Thanks in advance for all good advices!
Cheers!
Matthias
It's just a bunch of lines with different brightness/transparency. Basically, if you want a glow effect of size 20 pixels for a 1 px line, you draw 41 lines, each 1 px wide. The middle line gets your base colour; the other lines get colours that blend gradually from the base colour to 100% transparency (like in your example) or to the darkest colour variant (if you have a black background and no transparency).
That is it. :)
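A minimal sketch of that layered-lines idea in Python with Pillow, fading towards a black background rather than towards transparency; the sizes and the colour are illustrative:

from PIL import Image, ImageDraw

W, H, HALF = 400, 200, 20
base = (0, 255, 255)                        # neon cyan core colour
img = Image.new("RGB", (W, H), "black")
draw = ImageDraw.Draw(img)

# 41 one-pixel lines: full strength in the middle, fading with distance.
for offset in range(-HALF, HALF + 1):
    t = 1 - abs(offset) / (HALF + 1)        # 1 at the centre, ~0 at the edge
    colour = tuple(int(c * t) for c in base)
    y = H // 2 + offset
    draw.line([(20, y), (W - 20, y)], fill=colour, width=1)
img.save("glow_line.png")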
This isn't something I've ever done, but looking at your example, the basic approach I'd use to try and recreate it would be...
Start with an algorithm for drawing a filled shape large enough to include the original shape and the glow. For example, a rectangle becomes a slightly larger rectangle with rounded corners, and an infinitesimally wide line becomes a thickened line with semi-circular caps. Subtract out the original shape (and fill those pixels normally).
For each pixel in the glow, the colour depends on the shortest distance to any part of the original shape. This normally reduces to the distance to the nearest point on a line (e.g. one edge of a rectangle).
The distance is translated to a colour value, probably using Hue-Saturation-Value or a similar colour scheme, as well as a reduced alpha (increased transparency). For a neon glow, you probably want constant hue, decreasing brightness, maybe increasing saturation, and decreasing alpha.
Translate the HSV/whatever colour value to RGB for output.
EDIT - I should probably have said HSL rather than HSV: in HSL, if L is at its maximum value, the resulting colour is always white; in HSV, that's only true if the saturation is also zero. See http://en.wikipedia.org/wiki/HSL_and_HSV
The real trick is that even on a phone these days, I'd guess you probably should use hardware (shaders) for this - sorry, I don't know how that's done.
The "painters algorithm" overlaying of gradually smaller shapes that others have described here is also a possibility, but (1) possibly slower, depending on implementation issues, and (2) you may need to draw to an off-screen buffer, with some special handling for the alpha channel, then blit back to the screen to handle the transparency correctly - if you need transparency, that is.
EDIT - Silly me. An alternative approach is to apply a blur to your original shape (in greyscale), but instead of writing out the blurred pixels directly, apply the colour-transformation to each blurred pixel value.
A blur is basically a weighted moving average. Technically, a finite impulse response filter is implemented using a convolution, but the maths for that is a tad awkward; if you just want "a blur" of about the right size, draw a greyscale circle of pixels as your "weights" image.
The blur in this case basically replaces the distance-from-shape calculation.
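A sketch of that blur-then-colourize variant in Python with OpenCV; the line, the blur sigma and the hue are all illustrative:

import cv2
import numpy as np

h, w = 200, 400
shape = np.zeros((h, w), np.uint8)
cv2.line(shape, (20, 100), (380, 100), 255, 2)    # the original shape, greyscale

# The blur stands in for the distance-from-shape calculation.
glow = cv2.GaussianBlur(shape, (0, 0), 8)

# Colour transformation: fixed hue, blurred intensity drives value and alpha.
out = np.zeros((h, w, 4), np.uint8)
out[..., 0] = glow                                # B
out[..., 1] = glow                                # G (a cyan-ish glow)
out[..., 2] = glow // 4                           # R
out[..., 3] = glow                                # alpha fades with the blur
out[shape > 0] = (255, 255, 255, 255)             # crisp white core on top
cv2.imwrite("neon.png", out)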
     _____________________
    |                     |
----|---------------------|-----> line
    |_____________________|
         gradient block
Break up your line into small non-overlapping blocks. Use whatever graphics primitive you have to draw a tilted rectangular gradient: the centre is at 100% and the outer edges are at 0%.
Don't draw it on the image yet; you want to blend it with the image. Using regular transparency will just make it look like a random pipe or pole or something (unless you draw a white line, and your background is dark).
Here are two choices of blending mode:
color dodge: [blended pixel value] = [bottom pixel value] / (1 - [overlay's pixel value])
linear dodge: [blended pixel value] = min([overlay's pixel value] + [bottom pixel value], 1)
Then draw the line above the glow.
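For clarity, here are the two blend modes sketched in Python with NumPy, for float images normalised to [0, 1] (overlay = the gradient block, bottom = the existing image):

import numpy as np

def color_dodge(bottom, overlay):
    # Brightens the bottom layer; guard against division by zero.
    return np.minimum(1.0, bottom / np.maximum(1e-6, 1.0 - overlay))

def linear_dodge(bottom, overlay):
    # Also known as "add": saturating addition.
    return np.minimum(1.0, bottom + overlay)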
If you want to draw a curved "neon" line, simply draw it as a sequence of superimposed "neon dots" where each "neon dot" is a small circular image with transparency going from 0% at the origin to 100% at the edge of the circle.

Edge Detection and transparency

Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the algorithms that are common for this and the open source library opencv. Aside from rolling my own or using opencv is there an easy way to do this? I am open to any language or platform.
Thanks
If your background is consistent within an image but inconsistent across images it could get tricky, but here is what I would do:
Separate the image into intensity and colour channels, using a representation such as YUV or Lab.
Make a histogram over the colour part. The most frequently occurring colour is (most likely) your background. Update: a better trick here may be to find the most frequent colour among all pixels within one or two pixels of the edge of the image.
Starting from the edges of the image, set all pixels that have that colour, and are connected to the edge through pixels of that colour, to transparent.
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels that get their colour from both the background and the clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing through some edge detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge (i.e. the colour of the clothing in that region) and transparency (if your output image format supports that).
If you want to get really fancy, increase the transparency depending on how similar a pixel's colour is to the background colour.
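A sketch of the edge-connected background removal in Python with OpenCV, using cv2.floodFill seeded from the image corners (a simplification of "starting from the edges"); the colour tolerance is an assumption to tune per data set, and the edge-softening steps above are left out:

import cv2
import numpy as np

def remove_background(bgr, tol=12):
    h, w = bgr.shape[:2]
    flood_mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill wants a padded mask
    for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        # FLOODFILL_MASK_ONLY leaves the image untouched and marks the
        # filled region in flood_mask with 255 (encoded in bits 8-15 of flags).
        cv2.floodFill(bgr, flood_mask, seed, (255, 255, 255),
                      loDiff=(tol,) * 3, upDiff=(tol,) * 3,
                      flags=cv2.FLOODFILL_MASK_ONLY | (255 << 8))
    background = flood_mask[1:-1, 1:-1] > 0
    rgba = cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)
    rgba[background, 3] = 0           # edge-connected background goes transparent
    return rgba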
Basically, find the color of the background and subtract it, but I guess you knew this. It's a little tricky to do this all automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
find the background: There are several options. Probably the easiest is to histogram the image: a large number of pixels with similar values will be the background, and if there are two large clusters, the background will be the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background colour, but this seems inferior because, for example, reflection from a flash could dramatically brighten more centrally located background pixels.
remove the background: a first take at this would be to threshold the image based on the background color, and then run the "open" or "close" algorithms on this, and then use this as a mask to select your clothing article. (The point of open/close is to not remove small background colored items on the clothing, like black buttons on a white blouse, or, say, bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part of this will probably be at the shadow around the object (e.g. a black jacket on a white background will have a continuous gray shadow at some of the edges and where to make this cut?), but if you get this far, post another question.
If you know the exact colour intensity of the background, it will never change, and the articles of clothing will never coincide with this colour, then this is a simple application of background subtraction: everything that is not that particular colour intensity is considered an "on" pixel, one of interest. You can then use connected component labelling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to find separate groupings of objects.
For a colour image with the same background in every picture:
convert your image to HSV or HSL
determine the Hue value of the background (+/- 10): do this step once, using Photoshop for example, then use the same value for all your pictures.
perform a colour threshold: on the hue channel, exclude the hue of the background (typically the ranges [0, hue) and (hue, 255]); for all other channels, include the whole value range (typically 0 to 255). This will select the pixels which are NOT the background.
perform a "fill holes" operation (normally found along blob analysis or labelling functions) to complete the part of the clothes which may have been of the same color than the background.
now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, 0 pixels represent the background.
this step of the processing depends on how you want to make pixels transparent: typically, if you save your image as a PNG with an alpha (transparency) channel, use a logical AND (also called "masking") between the alpha channel of the original image and the mask built in the previous step.
voilà, the background has disappeared; save the resulting image.
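A sketch of that pipeline in Python with OpenCV; BG_HUE and TOL are the values you would measure once, as suggested, and hue wrap-around is ignored for simplicity (note OpenCV stores hue as 0-179 for 8-bit images, not 0-255):

import cv2
import numpy as np

BG_HUE, TOL = 60, 10                                  # assumed, measured once

bgr = cv2.imread("clothes.jpg")
hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 0]

# Colour threshold: select pixels which are NOT the background hue.
mask = np.where(np.abs(hue.astype(int) - BG_HUE) <= TOL, 0, 255).astype(np.uint8)

# "Fill holes": flood-fill the true background from a corner; whatever
# stays at 0 afterwards is a hole inside the clothes, so turn it on.
flood = mask.copy()
ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
mask |= cv2.bitwise_not(flood)

# Masking: logical AND between the alpha channel and the mask.
rgba = cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)
rgba[..., 3] = cv2.bitwise_and(rgba[..., 3], mask)
cv2.imwrite("clothes.png", rgba)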
