Removing shadows from a clear white surface - algorithm

I have an image of an object taken in a studio. The image is well lit from multiple sources, and the object stands on a matte white background. The background is also lit.
Most of the shadows that fall on the background are eliminated by the lights, but there are still some very faint shadows that I would like to remove.
So far, the only solutions I have found involve manual intervention. I would like to know if there are known methods for this, or if anybody has an idea how to approach such a problem.
The object can also contain white elements, and at this point I can't change the background color (to green or blue).
Thanks.

If you have strong contrast between foreground and background, you could use a simple flood-fill algorithm that stops on hitting a large contrast difference to classify pixels as background or foreground. Then just adjust the levels of the background to saturate the shadows to white while retaining reasonably good edge quality. It helps if your input data is at a significantly higher resolution than the output. If you have soft edges, or just need really good edge quality, you'll need an algorithm that estimates background color, foreground color, and transparency for each edge pixel. A good approach is the Soft Scissors paper from SIGGRAPH 2007.
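A minimal sketch of that flood-fill-plus-levels idea, assuming OpenCV/NumPy; the file name and thresholds are illustrative, not tuned:

    import cv2
    import numpy as np

    img = cv2.imread("studio_shot.png")          # hypothetical input file
    h, w = img.shape[:2]

    # floodFill needs a mask 2 pixels larger than the image.
    mask = np.zeros((h + 2, w + 2), np.uint8)

    # Seed from the four corners: on a white studio background they should
    # all hit background. loDiff/upDiff stop the fill at strong contrast.
    for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        cv2.floodFill(img, mask, seed, (255, 255, 255),
                      loDiff=(20, 20, 20), upDiff=(20, 20, 20),
                      flags=4 | (255 << 8) | cv2.FLOODFILL_MASK_ONLY)

    bg = mask[1:-1, 1:-1] > 0                    # background classification

    # Crudest possible "levels" step: saturate the faint background
    # shadows all the way to white.
    out = img.copy()
    out[bg] = 255
    cv2.imwrite("no_shadows.png", out)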

Related

How to implement a calibration procedure for the red and cyan colors of a monitor for specific red-cyan anaglyph glasses?

I am developing an application for the treatment of children. It must show different images to the left and right eyes. I decided to use cheap red-cyan glasses to separate the fields of view of the eyes: the first eye will see only red images, the second one only cyan.
The problem is that the colors on a monitor are not really red and cyan. The glasses are not ideal either. I need to implement a calibration procedure that searches for the best red and cyan colors for the current monitor and glasses. That is, I need to change white (the background color), red, and cyan to more suitable colors, so that the red and cyan images are each visible to only one eye.
Does anybody know any algorithms for calibrating anaglyph colors? I think I need to implement a special UI for calibrating the colors. I am developing the application for iOS and Android.
You seem to be missing some background knowledge, so here it is.
Monitors
Most displays nowadays are LCDs, which emit three basic wavelength bands (R, G, B). Red and green have fairly sharp spectra, but blue is relatively wide. An LCD also emits cyan and orange wavelength bands (not as sharp as R and G, but sharper than B).
I suspect these two come from the backlight (and they were present on all devices I measured, even phones).
Anaglyph glasses
These are band filters, so they block all wavelengths outside their range, up to a scale.
spectra
This is how it looks (white on my LCD):
and how I see/interpret it:
The bands are approximate (I have just a homemade spectroscope with a nonlinear scale, not a spectrograph) and I am unable to take a clear image of the spectra (I have only automatic cameras). The backlight residue is blocked by my glasses almost completely; even though the cyan filter passes it, it lowers the brightness to a point that is invisible to me at my current LCD brightness settings.
calibration
The wavelengths you can use are just R, G, B (no matter the perceived color).
Color is not the same thing as wavelength; it is subjective human perception, not a physical quantity!
So the color itself is irrelevant: just filter the image for one eye by keeping only the G and B channels (cyan) of its pixels, and the other by keeping only the R channel, and merge them together.
The only thing to calibrate is brightness. The filters in the glasses should have the same blocking properties, but cheap ones usually do not. That means one eye gets a different brightness than the other, which can cause discomfort, so you can multiply the pixels by a brightness factor (a separate value for the left and right eye). This is the only thing to calibrate: the lower the quality of the filters, the darker the image you need.
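A minimal sketch of that split-and-merge with per-eye brightness gains, assuming NumPy uint8 RGB images; the gains are what your calibration UI would let the user tune:

    import numpy as np

    def make_anaglyph(left_rgb, right_rgb, gain_left=1.0, gain_right=1.0):
        """Left eye gets only R, right eye gets only G+B (cyan), each
        with its own gain to compensate for unequal filters."""
        out = np.zeros_like(left_rgb)
        out[..., 0] = np.clip(left_rgb[..., 0] * gain_left, 0, 255).astype(np.uint8)
        out[..., 1] = np.clip(right_rgb[..., 1] * gain_right, 0, 255).astype(np.uint8)
        out[..., 2] = np.clip(right_rgb[..., 2] * gain_right, 0, 255).astype(np.uint8)
        return out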
anaglyph colors
You can use B/W images; these are the most comfortable to look at. You can use color images too, but for some colors (like blue water) this is uncomfortable, because one eye sees it and the other does not. The brain computes the rest, but the feeling becomes uncomfortable over time. It is similar to hearing music that is off key.
It can be helped by adding a white-ish component to such colors, but that loses the color correctness of the image; it depends on what you need to do...
anaglyph eye distance
I am from central Europe, so all the data below are from that region!
the average distance between the viewing axes of human eyes is 6.5 cm
the male horizontal FOV is 90 degrees (including peripheral vision)
the male horizontal FOV is 60 degrees (excluding peripheral vision)
So if your anaglyph render uses real sizes, set the FOV and camera distance accordingly (see the sketch after this list). If not, then you should also add the horizontal camera distance to the calibration, because depth perception is also affected by:
the distance of the viewer from the monitor
their subjective depth perception
the scale at which the objects are rendered (and the monitor/image size)
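A minimal sketch of turning those numbers into camera parameters, assuming a scene modelled in real-world metres; scene_scale is a hypothetical knob for scenes that are not real-sized:

    import math

    EYE_SEPARATION_M = 0.065     # average inter-eye distance (6.5 cm)
    HORIZONTAL_FOV_DEG = 60.0    # horizontal FOV excluding peripheral vision

    def stereo_camera_positions(center, right_axis, scene_scale=1.0):
        """Offset the left/right cameras along the camera's right axis."""
        half = 0.5 * EYE_SEPARATION_M * scene_scale
        left = tuple(c - half * r for c, r in zip(center, right_axis))
        right = tuple(c + half * r for c, r in zip(center, right_axis))
        return left, right, math.radians(HORIZONTAL_FOV_DEG)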

When using Photoshop/GIMP is it better to color->alpha and then resize, or vice versa?

I was making a circular icon with semi-transparency, so I started with a large filled-in circle with a black border, then I did white->alpha, and resized the image to my required size. Would it have made a difference if I had resized first and then done white->alpha?
Thanks.
Yes.
In general, whenever you re-sample, the order will have an impact if you are using any anti-aliasing, or if the resampling algorithm is anything other than nearest-neighbour.
Try the following exercise for a visual example:
In both cases, create your circular icon.
Case 1:
Change white-center of the circle to alpha (0%, fully transparent).
Re-sample (e.g. down-sample to 25%) the entire image using something other than nearest-neighbour (i.e. actually use anti-aliasing of some sort).
Paste a copy of the result over a red background.
You should only see black and red colors inside the circle when you zoom in, with a smooth transition from black-to-red.
Case 2:
Re-sample (e.g. down-sample to 25%) the entire image using something other than nearest-neighbour (i.e. actually use anti-aliasing of some sort).
Change white-center of the circle to alpha (0%, fully transparent).
Paste a copy of the result over a red background.
You should see a black outer circle, with a bit of a white halo inside it, then the red center, with a smooth black-to-white transition and a sharp white-to-red transition. This will depend on the tolerance you set on the magic-wand tool you are likely using to auto-select the region whose alpha properties you want to modify.
Now repeat case 2, but disable any sort of anti-aliasing and enforce the use of a nearest-neighbour algorithm rather than bi-cubic spline, Hermite, Gaussian, etc. Your results will look very similar to case 1, except that when you zoom in you won't see the smooth transition from black to red, just a sharp black-to-red transition.
In general, you will get the best subjective quality when you work on your images first and re-sample later. If you paste the result in as its own layer, you still have all the original image data available and none is lost; the image is just rendered smaller.
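A small sketch of the exercise, assuming Pillow; the circle drawing and the naive white->alpha are stand-ins for the GIMP/Photoshop operations:

    from PIL import Image, ImageDraw

    def circle_icon(size=256):
        img = Image.new("RGB", (size, size), "white")
        ImageDraw.Draw(img).ellipse([8, 8, size - 8, size - 8],
                                    fill="white", outline="black", width=16)
        return img

    def white_to_alpha(img, tol=10):
        """Naive color->alpha: near-white pixels become fully transparent."""
        rgba = img.convert("RGBA")
        rgba.putdata([(r, g, b, 0) if min(r, g, b) > 255 - tol else (r, g, b, a)
                      for r, g, b, a in rgba.getdata()])
        return rgba

    red = Image.new("RGBA", (64, 64), "red")

    # Case 1: color->alpha first, then resample -> smooth black-to-red edge.
    case1 = white_to_alpha(circle_icon()).resize((64, 64), Image.LANCZOS)

    # Case 2: resample first, then color->alpha -> a white halo survives,
    # because antialiased grey-ish edge pixels no longer match pure white.
    case2 = white_to_alpha(circle_icon().resize((64, 64), Image.LANCZOS))

    Image.alpha_composite(red, case1).save("case1.png")
    Image.alpha_composite(red, case2).save("case2.png")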

How to draw Bézier curves with Quartz with a quality equivalent to the default implementation?

I am drawing Bézier curves with my own code. Basically, I compute a large number of points which I join with a CGPath. But even with the same line width, I can't achieve the same quality as the default implementation. The edge of the stroke is a bit blurry due to anti-aliasing. The stroke is not bad looking, but I notice that with Apple's rendering the anti-aliasing looks different: the width of the anti-aliasing zone (where the pixels are neither the color of the stroke nor the color of the background) is narrower.
Spying a little with Instruments shows that UIBezierPath's stroke spends some time in libRIP, but I can't find out what that is exactly.
I have drawn an ellipse myself, using sin and cos operations, and my ellipse looks as good as the original.
I know that there is a problem when drawing over a gradient. Is that the case for you?
If you're plotting a large number of points, you might be getting darker anti-aliased edges because you're over-plotting. I can't find a good image online to illustrate it, but if you plot from A -> B and then A -> B again, both lines might generate, say, 30% grey partial pixels, which are then blended together, resulting in about 51% grey (1 - 0.7 × 0.7 under source-over blending).
You may not think you're doing that, but if you have a high density of points you might end up with overlapping components (and especially if you are plotting separate line segments, your line caps etc. will overlap).
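A tiny numeric sketch of how repeated source-over blending darkens anti-aliased edge pixels when strokes overlap:

    def source_over(top, bottom):
        """Resulting coverage of stacking two partially covered pixels."""
        return top + bottom * (1.0 - top)

    once = 0.30                           # a single 30% grey edge pixel
    twice = source_over(0.30, once)       # same pixel drawn twice: 0.51
    thrice = source_over(0.30, twice)     # drawn three times: ~0.657
    print(once, twice, thrice)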
It's probably worth posting some sample code and screenshots taken with Pixie or xScope so we can see what's happening.

Algorithm for "neon glow" graphics programming

I am searching for an article or tutorial that explains how one can draw primitive shapes (mainly simple lines) with a (neon) glow effect in the graphical output of a computer program. I do not want to do sophisticated stuff like, for example, in modern first-person shooters. I am looking for a simple solution, like the lines in this picture: http://tjl.co/blog/wp-content/uploads/2009/05/NeonStripes.jpg -- but of course drawn by a computer program in my case.
The whole thing should run on a modern smartphone, so the hardware is a bit limited.
I know a bit about OpenGL, but not too much, so unfortunately I am a bit lost here. I did some research on Google ("glow effect algorithm" and similar), but found either highly complex stuff for 3D games, or tutorials for Photoshop & co.
So what I would really need is an in-depth article on the subject, but not at a very advanced level. I hope that's even possible... I have just started with OpenGL and only did some minor graphics programming in the past, but I have been a programmer for many years, so I would understand technical papers in general.
Does anyone of you know of such an article/paper/tutorial/anything?
Thanks in advance for all good advices!
Cheers!
Matthias
It's just a bunch of lines with different brightness/transparency. Basically, if you want a glow of size 20 pixels around a 1 px line, you draw 41 lines, each 1 px wide. The middle line gets your base colour; the other lines get colours that fade gradually from the base colour to 100% transparency (like in your example) or to the darkest colour variant (if you have a black background and no transparency).
That is it. :)
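A minimal sketch of those 41 one-pixel lines, assuming Pillow and a horizontal line; the colour and sizes are illustrative:

    from PIL import Image, ImageDraw

    GLOW = 20                            # glow radius in pixels
    BASE = (57, 255, 20)                 # neon green base colour

    img = Image.new("RGBA", (400, 100), (0, 0, 0, 255))
    draw = ImageDraw.Draw(img, "RGBA")   # "RGBA" mode = draw with blending

    y = 50
    for offset in range(-GLOW, GLOW + 1):        # 41 lines in total
        # Alpha fades from fully opaque at the centre to ~0 at the edge.
        alpha = int(255 * (1.0 - abs(offset) / (GLOW + 1)))
        draw.line([(20, y + offset), (380, y + offset)],
                  fill=BASE + (alpha,), width=1)

    img.save("neon_line.png")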
This isn't something I've ever done, but looking at your example, the basic approach I'd use to try and recreate it would be...
Start with an algorithm for drawing a filled shape large enough to include the original shape and the glow. For example, a rectangle becomes a slightly larger rectangle with rounded corners; an infinitesimally wide line becomes a thickened line with semi-circular caps. Subtract out the original shape (and fill its pixels normally).
For each pixel in the glow, the colour depends on the shortest distance to any part of the original shape. This normally reduces to the distance to the nearest point on a line (e.g. one edge of a rectangle).
The distance is translated to a colour value, probably using Hue-Saturation-Value or a similar colour scheme, as well as a decreasing alpha (increasing transparency). For a neon glow, you probably want constant hue, decreasing brightness, maybe increasing saturation, and decreasing alpha.
Translate the HSV/whatever colour value to RGB for output. See this question.
EDIT - I should probably have said HSL rather than HSV: in HSL, if L is at its maximum value, the resulting colour is always white; for HSV, that's only true if the saturation is also zero. See http://en.wikipedia.org/wiki/HSL_and_HSV
The real trick is that even on a phone these days, I'd guess you probably should use hardware (shaders) for this - sorry, I don't know how that's done.
The "painters algorithm" overlaying of gradually smaller shapes that others have described here is also a possibility, but (1) possibly slower, depending on implementation issues, and (2) you may need to draw to an off-screen buffer, with some special handling for the alpha channel, then blit back to the screen to handle the transparency correctly - if you need transparency, that is.
EDIT - Silly me. An alternative approach is to apply a blur to your original shape (in greyscale), but instead of writing out the blurred pixels directly, apply the colour transformation to each blurred pixel value.
A blur is basically a weighted moving average. Technically, a finite-impulse-response filter is implemented using a convolution, but the maths for that is a tad awkward; if you just want "a blur" of about the right size, draw a greyscale circle of pixels as your "weights" image.
The blur in this case basically replaces the distance-from-shape calculation.
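A minimal sketch of the blur-then-colourise idea, assuming Pillow; the green colour ramp is illustrative:

    from PIL import Image, ImageDraw, ImageFilter

    # 1. Draw the shape in greyscale (white on black).
    shape = Image.new("L", (400, 100), 0)
    ImageDraw.Draw(shape).line([(20, 50), (380, 50)], fill=255, width=2)

    # 2. Blur it; blurred intensity stands in for distance-from-shape.
    glow = shape.filter(ImageFilter.GaussianBlur(radius=8))

    # 3. Colour-transform each blurred value: constant hue, with the
    #    intensity driving brightness and alpha.
    r = glow.point([int(0.22 * v) for v in range(256)])
    g = glow.point(list(range(256)))             # neon green ramp
    b = glow.point([int(0.08 * v) for v in range(256)])
    rgba = Image.merge("RGBA", (r, g, b, glow))  # alpha follows the glow

    # 4. Composite over the background and draw the core line on top.
    out = Image.alpha_composite(Image.new("RGBA", rgba.size, (0, 0, 0, 255)), rgba)
    ImageDraw.Draw(out).line([(20, 50), (380, 50)], fill=(220, 255, 220, 255), width=2)
    out.save("neon_blur.png")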
         _____________________
        |                     |
    ----|---------------------|-----> line
        |_____________________|
            gradient block
Break up your line into small non-overlapping blocks. Use whatever graphics primitive you have to draw a tilted rectangular gradient: the center is at 100% and the outer edge is at 0%.
Don't draw it on the image yet; you want to blend it with the image. Using regular transparency will just make it look like a random pipe or pole or something (unless you draw a white line, and your background is dark).
Here are two choices of blending mode (both sketched in code below):
color dodge: [blended pixel value] = [bottom pixel value] / (1 - [overlay's pixel value])
linear dodge: [blended pixel value] = min([overlay's pixel value] + [bottom pixel value], 1)
Then draw the line above the glow.
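Both blend modes as a minimal sketch, assuming NumPy arrays of floats in [0, 1] (bottom = background image, overlay = gradient block):

    import numpy as np

    def color_dodge(bottom, overlay):
        # Brightens the bottom layer; the division blows up towards
        # white where the overlay is bright.
        return np.clip(bottom / np.maximum(1.0 - overlay, 1e-6), 0.0, 1.0)

    def linear_dodge(bottom, overlay):
        # Plain addition clamped at 1 (also known as "add" blending).
        return np.minimum(bottom + overlay, 1.0)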
If you want to draw a curved "neon" line, simply draw it as a sequence of superimposed "neon dots", where each dot is a small circular image whose transparency goes from 0% at the centre to 100% at the edge of the circle.

Edge Detection and transparency

Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the algorithms that are common for this, as well as the open-source library OpenCV. Aside from rolling my own or using OpenCV, is there an easy way to do this? I am open to any language or platform.
Thanks
If your background is consistent within an image but inconsistent across images it could get tricky, but here is what I would do:
Separate the image into an intensity/colour form such as YUV or Lab.
Make a histogram over the colour part. Find the most frequently occurring colour; this is (most likely) your background. (Update: maybe a better trick here would be to find the most frequently occurring colour among all pixels within one or two pixels of the edge of the image.)
Starting from the edges of the image, set all pixels that have that colour, and are connected to the edge through pixels of that colour, to transparent (see the sketch after this answer).
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels whose colour comes from both the background and the piece of clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing with some edge-detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge pixel (i.e. the colour of the clothing in that region) and transparent (if your output image format supports that).
If you want to get really fancy, increase the transparency depending on how closely the colour of that pixel resembles the background colour.
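A minimal sketch of the first part (dominant border colour, then flood from the edges), assuming OpenCV/NumPy; the file name and tolerances are illustrative:

    import cv2
    import numpy as np

    bgr = cv2.imread("clothing.jpg")             # hypothetical input file
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    h, w = lab.shape[:2]

    # Most frequent colour within 2 px of the border = background guess.
    border = np.concatenate([lab[:2].reshape(-1, 3), lab[-2:].reshape(-1, 3),
                             lab[:, :2].reshape(-1, 3), lab[:, -2:].reshape(-1, 3)])
    colours, counts = np.unique(border, axis=0, return_counts=True)
    bg = colours[counts.argmax()].astype(int)

    # Flood from every border pixel that is close to that colour.
    mask = np.zeros((h + 2, w + 2), np.uint8)
    tol = (10, 10, 10)
    seeds = [(x, y) for x in range(w) for y in (0, h - 1)]
    seeds += [(x, y) for y in range(h) for x in (0, w - 1)]
    for x, y in seeds:
        if mask[y + 1, x + 1] == 0 and np.all(np.abs(lab[y, x] - bg) < 10):
            cv2.floodFill(lab, mask, (x, y), (0, 0, 0),
                          loDiff=tol, upDiff=tol,
                          flags=4 | (255 << 8) | cv2.FLOODFILL_MASK_ONLY)

    rgba = cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)
    rgba[mask[1:-1, 1:-1] > 0, 3] = 0            # background -> transparent
    cv2.imwrite("clothing_cutout.png", rgba)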
Basically, find the color of the background and subtract it, but I guess you knew that. It's a little tricky to do all of this automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
find the background: There are several options. Probably the easiest is to histogram the image; the large number of pixels with similar values is the background, and if there are two large clusters, the background will be the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background color, but this seems inferior, as, for example, a reflection from a flash could dramatically brighten more centrally located background pixels.
remove the background: a first take at this would be to threshold the image based on the background color, then run the "open" or "close" morphological operations on the result, and use that as a mask to select your clothing article (a sketch of this step follows this answer). (The point of open/close is to avoid removing small background-colored items on the clothing, like black buttons on a white blouse, or, say, bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part of this will probably be the shadow around the object (e.g. a black jacket on a white background will have a continuous grey shadow at some of the edges; where do you make the cut?), but if you get that far, post another question.
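A minimal sketch of the threshold-plus-open/close masking described above, assuming OpenCV and a bright, near-uniform background; all values are illustrative:

    import cv2

    bgr = cv2.imread("clothing.jpg")                   # hypothetical input
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Threshold: pixels far from the bright background count as clothing.
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY_INV)

    # Close then open: fill pin-holes in the garment and drop speckle
    # without erasing background-colored details (buttons, reflections).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    rgba = cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)
    rgba[..., 3] = mask                                # mask drives alpha
    cv2.imwrite("masked.png", rgba)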
If you know the exact color intensity of the background, it will never change, and the articles of clothing will never coincide with this color, then this is a simple application of background subtraction: everything that is not that particular color intensity is considered an "on" pixel, one of interest. You can then use connected-component labeling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to figure out separate groupings of objects.
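A minimal sketch of that background subtraction plus connected-component labeling, assuming OpenCV and a known background intensity (the value here is hypothetical):

    import cv2
    import numpy as np

    gray = cv2.imread("clothing.jpg", cv2.IMREAD_GRAYSCALE)
    BACKGROUND = 245                                  # known background level

    on = (np.abs(gray.astype(int) - BACKGROUND) > 10).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(on)
    print(num_labels - 1, "foreground component(s)")  # label 0 = background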
For a color image, with the same background in every picture:
convert your image to HSV or HSL
determine the hue value of the background (+/-10): do this step once, using Photoshop for example, then use the same value on all your pictures.
perform a color threshold: on the hue channel, exclude the hue of the background ([0, hue[ + ]hue, 255] typically); for all other channels, include the whole value range (0 to 255 typically). This selects the pixels which are NOT the background.
perform a "fill holes" operation (normally found alongside blob analysis or labeling functions) to recover the parts of the clothes which may have the same color as the background.
now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, 0 pixels represent the background.
this step of the processing depends on how you want to make pixels transparent: typically, if you save your image as a PNG with an alpha (transparency) channel, use a logical AND (also called "masking") operation between the alpha channel of the original image and the mask built in the previous step.
voilà, the background has disappeared; save the resulting image. (A sketch of this pipeline follows.)
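A minimal sketch of this hue-threshold pipeline, assuming OpenCV; BG_HUE is the value measured once in step 2 (illustrative here, and note OpenCV hue runs 0-179):

    import cv2
    import numpy as np

    bgr = cv2.imread("clothing.jpg")                 # hypothetical input
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    BG_HUE = 60                                      # measured background hue
    background = cv2.inRange(hsv, np.array([BG_HUE - 10, 0, 0]),
                             np.array([BG_HUE + 10, 255, 255]))
    mask = cv2.bitwise_not(background)               # clothes = NOT background

    # "Fill holes": flood the background from a corner (assumed to be
    # background); whatever the flood cannot reach inside is a hole.
    flood = mask.copy()
    ff = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, ff, (0, 0), 255)
    mask = mask | cv2.bitwise_not(flood)

    # AND the mask into the alpha channel and save as PNG.
    rgba = cv2.cvtColor(bgr, cv2.COLOR_BGR2BGRA)
    rgba[..., 3] = cv2.bitwise_and(rgba[..., 3], mask)
    cv2.imwrite("clothes_transparent.png", rgba)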
