How to implement a calibration procedure for the red and cyan colors of a monitor, for specific red-cyan anaglyph glasses? - algorithm

I am developing an application for the treatment of children. It must show different images to the left and right eyes. I decided to use cheap red-cyan glasses to separate the fields of view of the eyes: one eye will see only red images, the other only cyan.
The problem is that the colors on a monitor are not really red and cyan, and the glasses are not ideal either. I need to implement a calibration procedure that finds the best red and cyan colors for the current monitor and glasses. That is, I need to change the white (the background color) and the red and cyan to more suitable colors, so that the red and cyan images are each visible to only one eye.
Does anybody know any algorithms for calibrating anaglyph colors? I think I need to implement a special UI for calibrating the colors. I am developing the application for iOS and Android.

You obviously lack the background knowledge.
Monitors
Nowadays mostly LCDs are used; these emit three basic wavelength bands (R, G, B). Red and green have fairly sharp spectra, but blue is relatively wide. An LCD also emits cyan and orange wavelength bands (not as sharp as R and G, but sharper than B).
I suspect these two come from the backlight (and they are present on all devices I have measured, even phones).
Anaglyph glasses
These are band filters, so they block all wavelengths outside their range (to some degree).
spectra
This is what it looks like (white on my LCD):
and how I see/interpret it:
The bands are approximate (I have only a homemade spectroscope with a nonlinear scale, not a spectrograph), and I am unable to take a clear image of the spectra (I have only automatic cameras). The backlight residue is blocked completely by the red filter; the cyan filter passes it, but lowers its brightness to the point that it is invisible to me at my current LCD brightness settings.
calibration
The only wavelengths you can use are R, G, and B (no matter what color you display on screen).
Color is not the same as wavelength; color is just subjective human perception, not a physical quantity!
So the color itself is irrelevant: just filter the image for one eye by keeping only the G and B channels (cyan), and the image for the other eye by keeping only the R channel, and merge them together.
The only thing to calibrate is brightness. The filters in the glasses should have the same blocking properties, but cheap ones usually do not. That means one eye gets a different brightness than the other, which can cause discomfort, so you can multiply the pixels by a brightness factor (a separate value for the left and right eye). The lower the quality of the filters, the darker the image you need.
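For instance, here is a minimal sketch in Python/NumPy of the channel merge plus the per-eye brightness calibration; the function name and the gain parameters are illustrative, not from any particular library:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb, left_gain=1.0, right_gain=1.0):
    """Merge two uint8 RGB images of the same size into a red-cyan anaglyph.

    The eye behind the red filter sees only the R channel of left_rgb;
    the eye behind the cyan filter sees only the G and B channels of
    right_rgb. left_gain/right_gain are the per-eye brightness factors
    found during calibration (darker for the lower quality filter).
    """
    out = np.zeros_like(left_rgb)
    out[..., 0] = np.clip(left_rgb[..., 0] * left_gain, 0, 255)    # R -> left eye
    out[..., 1] = np.clip(right_rgb[..., 1] * right_gain, 0, 255)  # G -> right eye
    out[..., 2] = np.clip(right_rgb[..., 2] * right_gain, 0, 255)  # B -> right eye
    return out
```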
anaglyph colors
You can use B/W images; these are the most comfortable to look at. You can use color images too, but for some colors (like blue water) this is uncomfortable, because one eye sees the object and the other does not. The brain computes the rest, but the feeling grows uncomfortable over time. It is similar to hearing music that is off key.
This can be helped by adding a whitish component to such colors, but that sacrifices the color correctness of the image; it depends on what you need to do...
anaglyph eye distance
I am from central Europe, so all the data below are from that region!
The average distance between the view axes of human eyes is 6.5 cm.
The male horizontal FOV angle is 90 degrees (including peripheral vision).
The male horizontal FOV angle is 60 degrees (excluding peripheral vision).
So if your anaglyph render uses real sizes, set the FOV and camera distance accordingly. If not, then you should also add the horizontal camera distance to the calibration (see the sketch after this list), because depth perception is also affected by:
the distance of the viewer from the monitor
their subjective depth perception
the scale at which the objects are rendered (and the monitor/image size)
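As a rough illustration (all names and the extra scale factor are mine, not from any specific engine), the per-eye camera placement could look like:

```python
# Sketch: horizontal camera offsets for a stereo/anaglyph render.
# 6.5 cm is the average eye distance cited above; user_scale is the
# calibration factor the viewer tunes until the depth feels comfortable
# for their monitor distance and size.

EYE_DISTANCE_CM = 6.5

def camera_positions(center_x, user_scale=1.0):
    half = 0.5 * EYE_DISTANCE_CM * user_scale
    return center_x - half, center_x + half   # (left eye x, right eye x)
```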

Related

Get white balance (K) from white reference

I am looking for an algorithm that calculates the color temperature (in K) which is then used to set the color temperature in a digital camera. As input, the algorithm gets a captured white area of a photo (which is not white if the white balance is wrong). The algorithm should adjust the color temperature until the white area is really white (I hope it's clear what I mean).
One straightforward algorithm would be to linearly probe all temperatures, e.g. set the temperature -> capture a picture -> check the color of the white area, and then select the best match.
But how is this done correctly, assuming that I can only capture photos and set the color temperature in the camera (for instance, there is no information about the color matrices used for the white balance calculation, or any other information I could use for the calculation)?
First of all, in digital photography the white balance is not done with a white area; it is done with middle grey (specifically, a grey of 18% reflectance in visible light; see this 18% grey card).
For a correct white balance, the sampled pixels should have an RGB value of #777777 (119, 119, 119) if the shot is well exposed (neither underexposed nor overexposed). In any case, R = G = B must hold for a neutral white balance, irrespective of the color temperature of the light when you took the shot or your camera's W/B setting.
For other values, you can take some samples and check their tone curves in Camera Raw (it shows you the color temperature in K).
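Building on the probing idea from the question, here is a hedged Python sketch that bisects instead of scanning linearly. `capture_patch_mean` is a hypothetical callback (set the camera temperature, capture, return the mean R, G, B of the reference area), and the sketch assumes the red/blue imbalance changes monotonically with the temperature setting:

```python
def find_temperature(capture_patch_mean, lo=2000, hi=10000, tol=50):
    """Bisect the camera's W/B temperature until R ~ B on the grey patch.

    Raising the W/B temperature setting makes the camera render the scene
    warmer (more red), so: patch too red -> lower the setting, patch too
    blue -> raise it. Assumes monotonic behaviour over [lo, hi] kelvin.
    """
    while hi - lo > tol:
        mid = (lo + hi) // 2
        r, g, b = capture_patch_mean(mid)   # hypothetical camera callback
        if r > b:
            hi = mid    # patch is reddish: try a cooler setting
        else:
            lo = mid    # patch is bluish: try a warmer setting
    return (lo + hi) // 2
```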

Image Effect with Dark Borders

I was creating an effects library for a photo booth app. I have created effects like black/white, vintage, sepia, retro, etc.
I now want to create a few effects that have a dark border at the edges which forms a kind of frame for the image... something like this -> Example Effect
How can I do this using Pixel Bender and Flash?
The effect you are describing is called vignetting. It is basically just a darkening of the pixels, with a weight that changes depending on the distance from the center of the image. In image editing it corresponds to overlaying the image with black and applying a circular or elliptical mask to it.
You can do this by several methods, depending on how you operate on the image and its pixels; for example, by multiplying each pixel by a weight coefficient that is close to 1 near the center and smaller farther away from it. The distance can be calculated from the difference between the pixel coordinates and the image center.
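Not Pixel Bender, but a minimal sketch of that multiplicative weighting in Python/NumPy (values are illustrative); the same math translates directly to a per-pixel shader:

```python
import numpy as np

def vignette(img, strength=0.6):
    """Darken pixels with a weight that falls off with distance from the centre.

    img: float RGB array in [0, 1], shape (h, w, 3).
    strength: how dark the corners get (0 = no effect, 1 = black corners).
    """
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # normalised distance from the centre: 0 at the centre, 1 in the corners
    d = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2)
    weight = 1.0 - strength * d ** 2      # ~1 in the middle, smaller outside
    return img * weight[..., None]
```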

What does it mean to change the color channel?

Does it mean controlling the combination of an image and a color overlay applied to it, depending on the color space used (RGB, RGBA, CMYK, Lab, Grayscale, HSL, HSLA)? Or does it mean changing the color layer used in combination with other layers to form the final image? (If so, what could be changed, and in what regard?)
RGB is an abbreviation for three color channels (red, green and blue). They represent specific frequencies of light, and each channel carries a range of intensity. This model of color is commonly taught in school and is how most people understand colors and color mixing.

A different way to represent colors is HSL, which stands for Hue, Saturation and Lightness. Here the hue is the frequency of the color, the saturation is something like the contrast level, and the lightness is how much white or black is mixed in. HSLA (the A stands for Alpha, i.e. transparency) is actually a much more programmer-centric way of working with color (although most programmers seem to learn the RGB hex values for colors). There is a great website called Mothereffing HSL which lets you play with HSL values to better understand them.

CMYK is for pigments (which mix differently than light) and is found on printers. It is the same basic idea as RGB, just with Cyan, Magenta, Yellow and blacK. Because light and pigments do not mix the same way, a lot of work has been devoted to converting one color system to another (so you can see on your screen what will eventually come out of your printer). These systems are not perfectly aligned, however, so the goal is to get acceptably close.
All of these colors, when presented on a graph, form what is called a color space.
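To make "changing a channel" concrete, here is a small Python example using the standard-library colorsys module (note it uses HLS ordering, hue-lightness-saturation, with 0..1 ranges):

```python
import colorsys

r, g, b = 0.8, 0.2, 0.2                   # a red, as RGB in 0..1

# Changing a channel in RGB: directly alter one primary.
darker_red = (r * 0.5, g, b)

# Changing the hue channel in HSL: rotate the colour on the colour wheel
# without touching its lightness or saturation.
h, l, s = colorsys.rgb_to_hls(r, g, b)
rotated = colorsys.hls_to_rgb((h + 1 / 3) % 1.0, l, s)   # hue + 120 degrees

print(darker_red, rotated)
```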

Algorithm for "neon glow" graphics programming

I am searching for an article or tutorial that explains how one can draw primitive shapes (mainly simple lines) with a (neon) glow effect in the graphical output of a computer program. I do not want to do sophisticated stuff like in, for example, modern first person shooters. I am looking for a simple solution, like the lines in this picture: http://tjl.co/blog/wp-content/uploads/2009/05/NeonStripes.jpg -- but of course drawn by a computer program in my case.
The whole thing should run on a modern smart phone, so the hardware is a bit limited.
I know a bit about OpenGL, but not too much, so unfortunately I am a bit lost here. I did some research on Google ("glow effect algorithm" and similar), but found either highly complex stuff for 3D games, or tutorials for Photoshop & co.
So what I would really need is an in-depth article on the subject, but not at a very advanced level. I hope that's even possible... I have just started with OpenGL and did some minor graphics programming in the past, but I am a long-time programmer now, so I would understand technical papers in general.
Does anyone know of such an article/paper/tutorial/anything?
It's just a bunch of lines with different brightness/transparency. Basically, if you want a glow of size 20 pixels around a 1 px line, you draw 41 lines, each 1 px wide. The middle line gets your base colour; the other lines get colours that gradually go from the base colour to 100% transparency (as in your example) or to the darkest colour variant (if you have a black background and no transparency).
That is it. :)
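A minimal sketch of this layered-lines idea using Pillow (assumed available; all sizes and colours are illustrative):

```python
from PIL import Image, ImageDraw

base = (0, 255, 200)                       # base neon colour
img = Image.new("RGBA", (200, 60), (0, 0, 0, 255))
draw = ImageDraw.Draw(img, "RGBA")         # "RGBA" mode blends, not overwrites

half = 20                                  # glow size in pixels
for offset in range(-half, half + 1):      # 41 one-pixel lines in total
    alpha = int(255 * (1 - abs(offset) / (half + 1)))
    colour = base + (alpha,)               # fade towards full transparency
    draw.line([(10, 30 + offset), (190, 30 + offset)], fill=colour, width=1)

img.save("neon_line.png")
```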
This isn't something I've ever done, but looking at your example, the basic approach I'd use to try and recreate it would be...
Start with an algorithm for drawing a filled shape large enough to include the original shape and the glow. For example, a rectangle becomes a slightly larger rectangle with rounded corners. An infinitesimally wide line becomes a thickened line with semi-circular caps. Subtract out the original shape (and fill its pixels normally).
For each pixel in the glow, the colour depends on the shortest distance to any part of the original shape. This normally reduces to the distance to the nearest point on a line (e.g. one edge of a rectangle).
The distance is translated to a colour value, probably using Hue-Saturation-Value or a similar colour scheme, as well as a decreasing alpha (increasing transparency). For neon glows, you probably want constant hue, decreasing brightness, maybe increasing saturation, and decreasing alpha.
Translate the HSV/whatever colour value to RGB for output. See this question.
EDIT - I should probably have said HSL rather than HSV - in HSL, if L is at its maximum value, the resulting colour is always white. In HSV, that is only true if the saturation is also at zero. See http://en.wikipedia.org/wiki/HSL_and_HSV
The real trick is that even on a phone these days, I'd guess you probably should use hardware (shaders) for this - sorry, I don't know how that's done.
The "painters algorithm" overlaying of gradually smaller shapes that others have described here is also a possibility, but (1) possibly slower, depending on implementation issues, and (2) you may need to draw to an off-screen buffer, with some special handling for the alpha channel, then blit back to the screen to handle the transparency correctly - if you need transparency, that is.
EDIT - Silly me. An alternative approach is to apply a blur to your original shape (in greyscale), but instead of writing out the blurred pixels directly, apply the colour transformation to each blurred pixel value.
A blur is basically a weighted moving average. Technically it is a finite impulse response filter, implemented using a convolution, but the maths for that is a tad awkward; if you just want "a blur" of about the right size, draw a greyscale circle of pixels as your "weights" image.
The blur in this case basically replaces the distance-from-shape calculation.
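A hedged sketch of this blur-then-colourise variant in Python (SciPy's Gaussian blur is an assumption; any blur of roughly the right size works):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Render the shape into a greyscale mask, blur it, then map each blurred
# value to colour + alpha instead of writing it out directly.
mask = np.zeros((60, 200))
mask[29:31, 10:190] = 1.0                 # the original shape: a 2 px line

glow = gaussian_filter(mask, sigma=6)     # the blur replaces the
glow /= glow.max()                        # distance-from-shape calculation

rgba = np.zeros(mask.shape + (4,))
rgba[..., 1] = glow                       # constant cyan-ish hue, with
rgba[..., 2] = 0.8 * glow                 # brightness falling off...
rgba[..., 3] = glow                       # ...and alpha falling off too
```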
     _____________________
    |                     |
----|---------------------|-----> line
    |_____________________|
         gradient block
Break up your line into small non-overlapping blocks. Use whatever graphics primitive you have to draw a tilted rectangular gradient: the center is at 100% and the outer edge is at 0%.
Don't draw it on the image yet; you want to blend it with the image. Using regular transparency will just make it look like a random pipe or pole or something (unless you draw a white line, and your background is dark).
Here are two choices of blending mode:
color dodge: [blended pixel value] = [bottom pixel value] / (1 - [overlay's pixel value])
linear dodge: [blended pixel value] = min([overlay's pixel value] + [bottom pixel value], 1)
Then draw the line above the glow.
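For reference, a sketch of those two blend modes in Python/NumPy, for float images in [0, 1]:

```python
import numpy as np

def color_dodge(bottom, overlay):
    # brightens the bottom layer by the overlay; guard against divide-by-zero
    return np.clip(bottom / np.maximum(1.0 - overlay, 1e-6), 0.0, 1.0)

def linear_dodge(bottom, overlay):
    # simple additive blend, clamped at white
    return np.minimum(bottom + overlay, 1.0)
```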
If you want to draw a curved "neon" line, simply draw it as a sequence of superimposed "neon dots", where each dot is a small circular image whose transparency goes from 0% at the center to 100% at the edge of the circle.

Edge Detection and transparency

Using images of articles of clothing taken against a consistent background, I would like to make all pixels in the image transparent except for the clothing. What is the best way to go about this? I have researched the algorithms that are common for this and the open source library OpenCV. Aside from rolling my own or using OpenCV, is there an easy way to do this? I am open to any language or platform.
If your background is consistent within an image but inconsistent across images, it could get tricky, but here is what I would do:
Separate the image into an intensity/colour representation such as YUV or Lab.
Make a histogram over the colour part and find the most frequently occurring colour; this is (most likely) your background. (Update: a better trick here may be to find the most frequently occurring colour among the pixels within one or two pixels of the edge of the image.)
Starting from the edges of the image, set all pixels that have that colour, and are connected to an edge through pixels of that colour, to transparent.
The edge of the piece of clothing is now going to look a bit ugly, because it consists of pixels whose colour comes from both the background and the clothing. To combat this you need to do a bit more work:
Find the edge of the piece of clothing with some edge detection mechanism.
Replace the colour of the edge pixels with a blend of the colour just "inside" the edge pixel (i.e. the colour of the clothing in that region) and transparency (if your output image format supports that).
If you want to get really fancy, increase the transparency depending on how similar the pixel's colour is to the background colour.
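A rough sketch of the connected-to-the-edge step with OpenCV's Python bindings (cv2 assumed installed; the file name and tolerance are illustrative). It flood-fills from the image corners and turns everything reached into transparency:

```python
import cv2
import numpy as np

img = cv2.imread("shirt.jpg")              # illustrative input
h, w = img.shape[:2]

# Flood-fill from each corner: pixels connected to the border and within
# a tolerance of the corner's colour end up marked in the mask.
mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill wants a padded mask
flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    cv2.floodFill(img, mask, seed, (0, 0, 0),
                  loDiff=(20, 20, 20), upDiff=(20, 20, 20), flags=flags)

alpha = 255 - mask[1:-1, 1:-1]             # background -> transparent
cv2.imwrite("shirt.png", np.dstack([img, alpha]))
```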
Basically, find the color of the background and subtract it, but I guess you knew this. It's a little tricky to do this all automatically, but it seems possible.
First, take a look at blob detection with OpenCV and see if this is basically done for you.
To do it yourself:
find the background: there are several options. Probably the easiest is to histogram the image; the large number of pixels with similar values is the background, and if there are two large clusters, the background is the one with a big hole in the middle. Another approach is to take a band around the perimeter as the background color, but this seems inferior because, for example, reflections from a flash could dramatically brighten more centrally located background pixels.
remove the background: a first take would be to threshold the image based on the background color, then run the "open" or "close" morphological operations on the result, and use that as a mask to select the article of clothing. (The point of open/close is to avoid removing small background-colored items on the clothing, like black buttons on a white blouse, or bright reflections on black clothing.)
OpenCV is a good tool for this.
The trickiest part will probably be the shadow around the object (e.g. a black jacket on a white background will have a continuous gray shadow at some of the edges -- where do you make the cut?), but if you get that far, post another question.
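A sketch of the threshold + open/close idea with OpenCV (the background colour, distance threshold, and kernel size are all illustrative):

```python
import cv2
import numpy as np

img = cv2.imread("shirt.jpg")
bg = np.array([240, 240, 240], dtype=float)        # background colour, found as above
dist = np.linalg.norm(img.astype(float) - bg, axis=2)
mask = (dist > 40).astype(np.uint8) * 255          # "on" = probably clothing

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # keep black buttons etc.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop isolated specks
```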
If you know the exact color intensity of the background, it will never change, and the articles of clothing will never coincide with this color, then this is a simple application of background subtraction: everything that is not that particular color intensity is considered an "on" pixel, one of interest. You can then use connected component labeling (http://en.wikipedia.org/wiki/Connected_Component_Labeling) to figure out separate groupings of objects.
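For instance, with OpenCV (operating on a binary mask like the one built in the sketch above), keeping only the largest labelled group:

```python
import cv2
import numpy as np

num_labels, labels = cv2.connectedComponents(mask)      # label 0 = background
sizes = [np.sum(labels == i) for i in range(1, num_labels)]
largest = 1 + int(np.argmax(sizes))                     # biggest foreground blob
clean_mask = np.where(labels == largest, 255, 0).astype(np.uint8)
```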
For a color image, with the same background in every picture:
convert your image to HSV or HSL
determine the hue value of the background (+/- 10): do this step once, using Photoshop for example, then use the same value on all your pictures.
perform a color threshold: on the hue channel, exclude the hue of the background ([0, hue[ + ]hue, 255], typically); for all other channels, include the whole value range (0 to 255, typically). This selects the pixels which are NOT the background.
perform a "fill holes" operation (normally found among blob analysis or labelling functions) to recover the parts of the clothes which may be the same color as the background.
now you have an image which is a "mask" of the clothes: non-zero pixels represent the clothes, 0 pixels represent the background.
this step of the processing depends on how you want to make the pixels transparent: typically, if you save your image as a PNG with an alpha (transparency) channel, use a logical AND (also called "masking") operation between the alpha channel of the original image and the mask built in the previous step.
voilà, the background has disappeared; save the resulting image.
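A sketch of this recipe with OpenCV (the hue value and file names are illustrative; note OpenCV stores hue in 0..179):

```python
import cv2
import numpy as np

img = cv2.imread("shirt.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

hue_bg = 60                                   # measured once, e.g. in Photoshop
lower = np.array([hue_bg - 10, 0, 0])
upper = np.array([hue_bg + 10, 255, 255])
mask = 255 - cv2.inRange(hsv, lower, upper)   # select NOT-background pixels

# "fill holes": repaint everything inside the outer contours as clothing
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)

# logical AND between the alpha channel and the mask, then save as PNG
rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
rgba[..., 3] = cv2.bitwise_and(rgba[..., 3], mask)
cv2.imwrite("shirt.png", rgba)
```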
