Get white balance (K) from white reference - algorithm

I am looking for an algorithm which calculates the color temperature (in K) that is then used to set the color temperature in a digital camera. As input, the algorithm gets a captured white area of a photo (which is not white if the white balance is wrong). The algorithm should estimate the color temperature until the white area is really white (I hope it's clear what I mean).
One straightforward algorithm would be to linearly probe all temperatures, e.g. set the temperature -> capture a picture -> check the color of the white area, and then select the best match.
But how is this correctly done, assuming that I can only capture photos and set the color temperature in the camera (for instance, there is no information about the color matrices used for the white balance calculation, or any other information I could use for the calculation)?

First of all, in digital photography the white balance is not done with a white area, it's done with middle grey (specifically a grey of 18% reflectance in visible light, see this 18% grey card).
For a correct white balance, the sampled pixels must have an RGB value of #777777 (119, 119, 119) if the shot is well exposed (neither underexposed nor overexposed). In any case, R=G=B will hold for a neutral white balance, irrespective of the color temperature of the light when you took the shot or your camera's W/B setting.
For other values, you can take some samples and check their tone curves in Camera Raw (it shows you the color temperature in K).
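If the camera only exposes "set temperature, capture, measure" as in the question, the search itself can be a simple bisection on the red/blue imbalance of the reference patch. Below is a minimal sketch assuming two hypothetical functions, setColorTemperature(kelvin) and captureGreyPatchAverage(), which stand in for whatever camera API is actually available; on some cameras the warm/cool direction is reversed, in which case the comparison flips.

// Hypothetical camera hooks (assumptions, not a real API):
//   setColorTemperature(kelvin)   -- applies a WB setting on the camera
//   captureGreyPatchAverage()     -- captures and returns {r, g, b} average of the reference patch
function findWhiteBalance(setColorTemperature, captureGreyPatchAverage) {
  let lo = 2000, hi = 10000;               // typical camera WB range in kelvin
  for (let step = 0; step < 20; step++) {  // 20 bisection steps is plenty of resolution
    const mid = (lo + hi) / 2;
    setColorTemperature(mid);
    const {r, g, b} = captureGreyPatchAverage();
    if (Math.abs(r - b) < 1) return mid;   // patch is neutral enough (R ~ B ~ G)
    if (r > b) hi = mid;                   // patch too warm -> lower the setting
    else       lo = mid;                   // patch too cool -> raise the setting
  }
  return (lo + hi) / 2;
}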

Related

How to implement a procedure of calibration red and cyan colors of monitor for concrete red-cyan anaglyph glasses?

I am developing an application for the treatment of children. It must show different images to the left and right eyes. I decided to use cheap red-cyan glasses to separate the fields of view of the eyes. The first eye will see only red images, the second one only cyan.
The problem is that the colors on a monitor are not really red and cyan. Also, the glasses are not ideal. I need to implement a calibration procedure to search for the best red and cyan colors for the current monitor and glasses. I mean I need to change the white (the background color), the red, and the cyan to more suitable colors, so that the red and cyan are each visible to only one eye.
Does anybody know any algorithms for calibrating anaglyph colors? I think I need to implement a special UI for calibrating the colors. I am developing the application for iOS and Android.
You obviously lack the background knowledge.
Monitors
Nowadays mostly LCDs are used; these emit 3 basic wavelength bands (R, G, B). Red and green have fairly sharp spectra, but blue is relatively wide. An LCD also emits cyan and orange wavelength bands (not as sharp as R and G, but sharper than B).
I suspect these two come from the back-light (and they were present on all devices I measured, even phones).
Anaglyph glasses
these are band filters, so they block all wavelengths outside their range, up to a point
spectra
This is how it looks (white on my LCD):
and how I see/interpret it:
The bands are approximate (I only have a homemade spectroscope with a nonlinear scale, not a spectrograph) and I am unable to take a clear image of the spectra (I only have automatic cameras). The back-light residue is blocked by my glasses almost completely; even where the cyan filter passes it, the brightness is lowered to the point that it is invisible at my current LCD brightness settings.
calibration
The wavelengths you can use are just R, G, B (no matter the color).
Color is not the same as wavelength; it is just subjective human perception, not a physical variable!
So the color is irrelevant: just filter the image for one eye by keeping only the G,B channels and the other by keeping only the R channel, and merge them together.
The only thing to calibrate is brightness. The filters in the glasses should have the same blocking properties, but the cheap ones usually do not. That means one eye gets a different brightness than the other, which can cause discomfort, so you can multiply the pixels by a brightness factor (a separate value for the left and right eye). This is the only thing to calibrate: the lower the quality of the filters, the darker the image you need.
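A minimal sketch of the merge plus per-eye brightness calibration described above, assuming Canvas-style RGBA byte arrays; leftGain and rightGain are the two values a calibration UI would let the user adjust (which eye gets the red channel depends on how the glasses are worn).

// Merge two RGBA byte arrays of the same size into one anaglyph frame:
// one eye gets only the R channel, the other only G and B, each scaled by a
// per-eye gain found during calibration. Uint8ClampedArray clamps to 0..255.
function mergeAnaglyph(leftData, rightData, leftGain, rightGain) {
  const out = new Uint8ClampedArray(leftData.length);
  for (let i = 0; i < leftData.length; i += 4) {
    out[i]     = leftData[i]      * leftGain;   // R taken from the left-eye image
    out[i + 1] = rightData[i + 1] * rightGain;  // G taken from the right-eye image
    out[i + 2] = rightData[i + 2] * rightGain;  // B taken from the right-eye image
    out[i + 3] = 255;                           // fully opaque
  }
  return out;
}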
anaglyphs colors
You can use B/W images; these are the most comfortable to look at. You can use color images too, but for some colors (like blue water) this is uncomfortable, because one eye sees it and the other does not. The brain computes the rest, but the feeling becomes uncomfortable over time. It is similar to hearing music that is off key.
It can be helped by adding a white-ish component to such a color, but that will lose the color correctness of the image; it depends on what you need to do ...
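For example, the white-ish component can be a simple per-channel blend toward white; a small sketch (the blend factor k is just a tuning value):

// Blend an [r, g, b] color toward white by a factor k in 0..1.
function addWhite(rgb, k) {
  return rgb.map(c => Math.round(c + (255 - c) * k));
}
// e.g. addWhite([0, 0, 255], 0.3) -> pure blue becomes a lighter blue both eyes can pick up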
anaglyphs eye distance
I am from central Europe, so all data below are from that region!
The average distance between the human eyes' viewing axes is 6.5 cm.
male horizontal FOV angle is 90 degree (including peripheral vision)
male horizontal FOV angle is 60 degree (excluding peripheral vision)
So if your anaglyph render uses real-world sizes, set the FOV and camera distance accordingly. If not, then you should also add the horizontal camera distance to the calibration, because depth perception is also affected by:
the distance of viewer to monitor
their subjective depth perception
the scale at which the objects are rendered (also the monitor/image size)

Best natural way of coloring an icon/sprite

I want to color a sprite/icon that has a transparent background and shadows. I tried to shift the hue of all pixels, but it does not look very natural, and I have problems with the black and white colors in an image. If an image tends toward black, shifting the hue does not change the black into red or another color, even when shifting by 360 degrees.
I also tried coloring by adding and subtracting color, and even in that case the black and the white tend to get colored or disappear entirely.
Maybe I should put an image over the icon to achieve the coloring effect?
Any suggestions on how to proceed?
I'm lost.
You've been asking a lot about this hue shifting thing, so I figured I'd try to work out an example: http://jsfiddle.net/EMujN/3/
Here's another that uses an actual icon: http://jsfiddle.net/EMujN/4/
There's a lot in there. There's a huge data URL which you can ignore unless you want to replace it. Here's the relevant part where we modify HSL.
// hsl is the pixel's [h, s, l] triple, each component normalized to 0..1
// SHIFT H HERE: rotate the hue and wrap it back into 0..1
var hMod = .3;
hsl[0] = (hsl[0] + hMod) % 1;
// MODIFY S HERE: add to saturation, clamped to 0..1
var sMod = .6;
hsl[1] = Math.max(0, Math.min(1, hsl[1] + sMod));
// MODIFY L HERE: add to lightness, clamped to 0..1
var lMod = 0;
hsl[2] = Math.max(0, Math.min(1, hsl[2] + lMod));
I've converted to HSL because it's a lot easier to accomplish what you want in that color space than RGB.
Without getting any more complex, you have three variables you can tune: how much to add to either Hue, Saturation, or Lightness. I have the lightness variable set to 0 because any higher and you will see some nasty JPEG artifacts (if you can find a decent .png that would be better, but I went with the first CC night image I could find).
I think the hue shift (yellow to green) looks pretty good though and I have maxed out the saturation, so even a normally white light appears bright purple. Like I said in my comment, you will need to increase the lightness and saturation if you want to colorize patches of black and white. Hopefully, you can figure out what you need from this example.
image used: http://commons.wikimedia.org/wiki/File:Amman_(Jordan)_at_night.jpg
I found a better solution myself which can solve the problem with black and white.
Basically the solution can be broken into multiple steps. Here I will define the steps; later I'll provide some working code:
Get the image
Calculate the predominant color, either by averaging the image pixels or simply by providing an input RGB value that is the predominant color your eye catches.
If the predominant color tends to be black or white, or both, the image has to be recolored with an additive or subtractive method: additive if black, subtractive if white. So basically all RGB pixels should be pushed up or down until the predominant color becomes RED. I think RED is the best choice, because RED is first on the HUE scale, and this helps when we hue-shift the pixels.
To have a single algorithm that works with different kinds of images, not only those whose predominant color is black or white, the images with a non-black and non-white predominant color should ideally be pre-hue-shifted manually, using Photoshop or another algorithm, so that their new predominant color is RED too.
After that, the hue-shift coloring is straightforward (a sketch is given after this answer). We know that the predominant color is RED for all the images, and we shift the HUE values by the difference between the HSV hue of the desired color and the HSV hue of the predominant color (RED).
Game over. We have a pretty universal way to color different images with hue shifting in a natural way.
Another question could be how to automatically pre-shift the input images whose predominant color is not black or white.
But this is another question.
Why can this coloring method be considered natural? Simply consider one thing: generally the non-dominant black or white colors are part of the shadows and highlights which give a 3D feel to the image. On the other hand, if my shoes are 100% black and I tint them with some color, they will no longer be black. Coloring a dominant black cannot be achieved simply by shifting the HSV parameters; other steps have to be performed, namely the steps described above.
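A sketch of the final hue-shift step, reusing rgbToHsl/hslToRgb helpers (h, s, l normalized to 0..1) like those in the jsfiddle from the other answer; the helpers are assumed to be available and hslToRgb is assumed to return [r, g, b]. Once the predominant color has been pushed to red (hue 0), every pixel's hue is simply rotated by the hue of the desired color.

// Assumes rgbToHsl / hslToRgb helpers, as used in the jsfiddle above.
function recolorFromRed(pixels, targetRgb) {
  // the predominant color is RED (hue 0), so the shift is simply the target hue
  const shift = rgbToHsl(targetRgb[0], targetRgb[1], targetRgb[2])[0];
  for (let i = 0; i < pixels.length; i += 4) {
    const hsl = rgbToHsl(pixels[i], pixels[i + 1], pixels[i + 2]);
    hsl[0] = (hsl[0] + shift) % 1;            // rotate hue, wrap into 0..1
    const [r, g, b] = hslToRgb(hsl[0], hsl[1], hsl[2]);
    pixels[i] = r; pixels[i + 1] = g; pixels[i + 2] = b;
  }
}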

Selective Color of image

I have spent more than a week reading about selective color change in an image. It means selecting a color from a color picker, then selecting a part of the image in which I want to change the color, and applying the change from the original color to the color from the color picker.
E.g. if I select a blue color in the color picker and I also select a red part of the image, I should be able to change the red color to blue across the whole image.
Another example: if I have an image with red apples and oranges, and I select an apple in the image and a blue color in the color picker, then all apples should change color from red to blue.
I have some ideas, but of course I need something more concrete on how to do this.
Thank you for reading
As a starting point, consider clustering the colors of your image. If you don't know how many clusters you want, then you will need methods to determine whether or not to merge two given clusters. For the moment, let us suppose that we know that number. For example, given the following image at left, I mapped its colors to 3 clusters, which have the mean colors shown in the middle, and representing each cluster by its mean color gives the figure at right.
With the output at right, now what you need is a method to replace colors. Suppose the user clicks (a single point) somewhere in your image; then you know the positions in the original image that you will need to modify. For the next image, the user (me) clicked on a point contained by the "orange" cluster, and then clicked on some blue hue. From that, you make a mask representing the points in the "orange" cluster and play with it. I used a simple Gaussian filter followed by a flat 3x5 dilation. Then you replace the hues in the original image according to the produced mask (after the low-pass filtering, the values in it are also used as an alpha value for compositing the images).
Not perfect at all, but you could have a better clustering than me and also a much-less-primitive color replacement method. I intentionally skipped the details about clustering method, color space, and others, because I used only basic k-means on RGB without any pre-processing of the input. So you can consider the results above as a baseline for anything else you can do.
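For reference, a bare-bones k-means on raw RGB triples (no pre-processing), just to show the shape of the clustering step; pixels is an array of [r, g, b] values and k the chosen number of clusters.

// Basic k-means on RGB triples; returns the cluster means and per-pixel labels.
function kmeans(pixels, k, iterations = 10) {
  // start from k pixels spread evenly across the array as initial means
  let means = Array.from({length: k}, (_, i) =>
    pixels[Math.floor(i * pixels.length / k)].slice());
  let labels = new Array(pixels.length).fill(0);
  const dist2 = (a, b) =>
    (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;
  for (let it = 0; it < iterations; it++) {
    // assignment step: nearest mean for every pixel
    labels = pixels.map(p => {
      let best = 0;
      for (let j = 1; j < k; j++)
        if (dist2(p, means[j]) < dist2(p, means[best])) best = j;
      return best;
    });
    // update step: recompute each mean from its members
    const sums = Array.from({length: k}, () => [0, 0, 0, 0]);
    pixels.forEach((p, i) => {
      const s = sums[labels[i]];
      s[0] += p[0]; s[1] += p[1]; s[2] += p[2]; s[3]++;
    });
    means = sums.map((s, j) => s[3] ? [s[0] / s[3], s[1] / s[3], s[2] / s[3]] : means[j]);
  }
  return {means, labels};
}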
Given the image, a selected color, and a target new color - you can't do much that isn't ugly. You also need a range, some amount of variation in color, so you can say one pixel's color is "close enough" while another is clearly "different".
First step of processing: you create a mask image, which is grayscale, varies from 0.0 to 1.0 (or from zero to some maximum value we'll treat as 1.0), and is the same size as the input image. For each input pixel, test whether its color is sufficiently near the selected color. If it's "the same" or "close enough", put 1.0 in the mask. If it's different, put 0.0. If it's somewhere in between, put an in-between value. Exactly how to do this depends on the details of the image.
This might work best in LAB space, and testing for sameness according to the angle of the A,B coordinates relative to their origin.
Once you have the mask, put it aside. Now color-transform the whole image. This might be best done in HSV space. Don't touch the V channel. Add a constant to H, modulo 360 degrees (or mod 256, if H is stored as bytes), and multiply S by a constant, choosing both so that the HSV coordinates of the selected color are moved to the HSV coordinates of the target color. Convert the transformed H and S, with the unchanged V, back to RGB.
Finally, use the mask to blend the original image with the color-transformed one. Apply this to each channel - red, green, blue:
output = (1-mask)*original + mask*transformed
If you're doing it all in byte arrays, 0 is 0.0 and 255 is 1.0, and be careful of overflow and signed/unsigned problems.
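A minimal sketch of the mask-and-blend step on RGBA byte arrays; the "close enough" test is reduced to a plain RGB distance with a linear ramp for brevity (the LAB-angle test described above would slot into the same place), and `transformed` is assumed to be the already color-transformed copy of the image.

// Build a soft mask (0..1) from color closeness, then blend per channel:
// output = (1 - mask) * original + mask * transformed
// Uint8ClampedArray takes care of rounding and clamping on assignment.
function selectiveBlend(original, transformed, selected, range) {
  const out = new Uint8ClampedArray(original.length);
  for (let i = 0; i < original.length; i += 4) {
    const d = Math.hypot(original[i]     - selected[0],
                         original[i + 1] - selected[1],
                         original[i + 2] - selected[2]);
    // 1 inside the range, 0 beyond twice the range, linear ramp in between
    const mask = Math.max(0, Math.min(1, (2 * range - d) / range));
    for (let c = 0; c < 3; c++)
      out[i + c] = (1 - mask) * original[i + c] + mask * transformed[i + c];
    out[i + 3] = original[i + 3];
  }
  return out;
}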

Want to understand why dithering algorithm can decrease color depth?

Sometimes I have a true-color image; by using a dithering algorithm, I can reduce the colors to just 256. I want to know how the dithering algorithm achieves this.
I understand that dithering can reduce the error, but how can the algorithm decrease the color depth, especially from true color down to just 256 colors or even fewer?
Dithering simulates a higher color depth by "mixing" the colors in a defined palette to create the illusion of a color that isn't really there. In reality, it's doing the same thing that your computer monitor is already doing: taking a color, decomposing it into primary colors, and displaying those right next to each other. Your computer monitor does it with variable-intensity red, green, and blue, while dithering does it with a set of fixed-intensity colors. Since your eye has limited resolution, it sums the inputs, and you perceive the average color.
In the same way, a newspaper can print images in grayscale by dithering the black ink. They don't need lots of intermediate gray colors to get a decent grayscale image; they simply use smaller or larger dots of black ink on the page.
When you dither an image, you lose information, but your eye perceives it in largely the same way. In this sense, it's a little like JPEG or other lossy compression algorithms which discard information that your eye can't see.
Dithering by itself does not decrease the number of colors. Rather, dithering is applied during the process of reducing the colors to make the artifacts of the color reduction less visible.
A color that is halfway between two other colors can be simulated by a pattern that is half of one color and half of the other. This can be generalized to other percentages as well. A color that is a mixture of 10% of one color and 90% of the other can be simulated by having 10% of the pixels be the first color and 90% of the pixels be the second. This is because the eye will tend to consider the random variations as noise and average them into the overall impression of the color of an area.
The most effective dithering algorithms will track the difference between the original image and the color-reduced one, and account for that difference while converting future pixels. This is called error diffusion - the errors on the current pixel are diffused into the conversions of other pixels.
The process of selecting the best 256 colors for the conversion is separate from dithering.
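As an illustration of error diffusion, here is a sketch of the Floyd-Steinberg variant on a single greyscale channel; `quantize` just snaps to black or white here, but it could equally be a nearest-palette-color lookup.

// Floyd-Steinberg error diffusion on a width*height greyscale array (values 0..255).
function ditherGrey(pixels, width, height) {
  const out = Float32Array.from(pixels);
  const quantize = v => (v < 128 ? 0 : 255);      // 1-bit palette for simplicity
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const oldVal = out[i];
      const newVal = quantize(oldVal);
      out[i] = newVal;
      const err = oldVal - newVal;                // diffuse the quantization error
      if (x + 1 < width)        out[i + 1]         += err * 7 / 16;
      if (y + 1 < height) {
        if (x > 0)              out[i + width - 1] += err * 3 / 16;
                                out[i + width]     += err * 5 / 16;
        if (x + 1 < width)      out[i + width + 1] += err * 1 / 16;
      }
    }
  }
  return Uint8ClampedArray.from(out);
}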

Organizing Images By Color

Maybe you've noticed but Google Image search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a Bicubic or Bilinear filter (or something advanced anyway). This will vastly reduce the colour noise and give you a result which is closer to how humans perceive the image. I.e. a pixel-raster consisting purely of yellow and blue pixels would become clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinate of each of these swatches and plot the (a*, b*) coordinate of each of these colors on a two dimensional space. I'd drop the L* component because luminance (brightness) of the pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd calculate a partitioning using a Voronoi Diagram. Then for a given image, I'd take each pixel, calculate its (a*, b*) coordinate. Do this for every pixel in the image and so build up the histogram of counts in each Voronoi partition. The partition that contains the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
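A sketch of that histogram step: each pixel is assigned to the nearest swatch in the (a*, b*) plane, which is exactly the Voronoi partition seeded by the 12 swatches. The rgbToLab conversion is assumed to exist (any sRGB-to-CIELAB routine will do), and swatchesLab holds the precomputed Lab coordinates of the swatches.

// Returns the index of the swatch whose Voronoi cell contains the most pixels.
// rgbToLab(r, g, b) is assumed to return [L, a, b].
function dominantSwatch(pixels, swatchesLab) {
  const counts = new Array(swatchesLab.length).fill(0);
  for (let i = 0; i < pixels.length; i += 4) {
    const [, a, b] = rgbToLab(pixels[i], pixels[i + 1], pixels[i + 2]);
    let best = 0, bestD = Infinity;
    swatchesLab.forEach(([, sa, sb], j) => {
      const d = (a - sa) ** 2 + (b - sb) ** 2;    // L* is ignored on purpose
      if (d < bestD) { bestD = d; best = j; }
    });
    counts[best]++;
  }
  return counts.indexOf(Math.max(...counts));     // the image's 'color'
}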
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?
