White balance / color temperature algorithm

I have an RGB picture and I want to adjust its color temperature, like in Lightroom. I tried converting RGB to LAB and adjusting the b channel, but the result is wrong. Can someone help with sample code or an algorithm for a color temperature implementation?
Something like this:
rgb temperature(int temp, rgb color);
where temp is the temperature, color is the input color (from the image), and the return value is the output (adjusted) color.
Right now I use the algorithm below. I took RGBtoLAB and LABtoRGB from: http://www.easyrgb.com/index.php?X=MATH&H=01#text1
for (int i = 0; i < img.pixCount; i++)
{
    lab32f lab = RGBtoLAB(img.getPixel(i));
    lab.b += temp.value; // -100..100
    img.setPixel(i, LABtoRGB(lab));
}
Result images:
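For comparison, here is a minimal sketch (an editor's illustration, not Lightroom's actual algorithm) of the other common approach: instead of shifting a LAB channel, apply opposite gains to the red and blue channels in RGB. The 0.3 maximum gain is an arbitrary constant to tune.

import numpy as np

def adjust_temperature(img, temp):
    """img: HxWx3 uint8 RGB array; temp in -100..100.
    Positive = warmer (more red, less blue), negative = cooler."""
    gain = (temp / 100.0) * 0.3          # up to +/-30% per channel, arbitrary choice
    out = img.astype(np.float32)
    out[..., 0] *= 1.0 + gain            # red channel
    out[..., 2] *= 1.0 - gain            # blue channel
    return np.clip(out, 0, 255).astype(np.uint8)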

Related

How to disable Center Color in polygon with multicolored gradient?

I am trying to build a polygon using only two colors for all vertices, but the GDI+ library automatically inserts a white center color that blends across the whole figure. I would like to disable the center color, rather than work around it by using SetCenterColor(), available in the PathGradientBrush class. Shifting the default position to a faraway point using SetCenterPoint() is very inelegant. Is that possible?
Thanks
A sample follows:
CMyGDIPlus gdi(this); // use your class instead
using namespace Gdiplus;
Graphics & graphics = gdi.GetGraphics();
graphics.SetSmoothingMode(SmoothingModeNone);

Gdiplus::Rect gRect;
graphics.GetVisibleClipBounds(&gRect);

int i;
int colorSize = 4;
GraphicsPath path;
Point arrPoint[4];
Color arrColors[4];

arrPoint[0].X = gRect.GetLeft();
arrPoint[0].Y = gRect.GetTop();
arrPoint[1].X = gRect.GetRight();
arrPoint[1].Y = gRect.GetTop() + 100;
arrPoint[2].X = gRect.GetRight();
arrPoint[2].Y = gRect.GetBottom();
arrPoint[3].X = gRect.GetLeft();
arrPoint[3].Y = gRect.GetBottom() - 100;

for(i = 0; i < colorSize; i++)
{
    if(i < 2)
        arrColors[i].SetFromCOLORREF(RGB(0, 128, 0)); // green
    else
        arrColors[i].SetFromCOLORREF(RGB(0, 0, 192)); // blue
}

path.AddLines(arrPoint, 4);
PathGradientBrush pathBrush(&path);
pathBrush.SetSurroundColors(arrColors, &colorSize);
pathBrush.SetGammaCorrection(TRUE);
graphics.FillPath(&pathBrush, &path);
You only need to calculate the color value of the center point yourself. Averaging the channel values (e.g. (r1 + r2) / 2) works better for lightening/darkening colors and creating gradients.
Refer: Algorithm for Additive Color Mixing for RGB Values
Add: pathBrush.SetCenterColor(Color(0, 128*0.5, 192*0.5)); // average of the green and blue surround colors
Debug:

Calculate the temperature in a thermal image (MATLAB)

What I am trying to do is calculate the temperature of a selected area in an image. My code:
M = imread('IR003609.BMP');
a = min(M(:)); % minimum pixel value in the image
b = max(M(:)); % maximum pixel value in the image
imshow(M, [a b]);
h = roipoly();
maskOfROI = h;
selectedValues = M(maskOfROI);
averageTemperature = mean(selectedValues)
maxTemperature = max(selectedValues)
minTemperature = min(selectedValues)
My image is this, with the mouth area selected:
The values it returns are:
averageTemperature =
64.0393
maxTemperature =
uint8
255
minTemperature =
uint8
1
Now my questions are: is the program returning the correct temperature values (compared with the values seen in the image)? Or do I need to account for emissivity? If the values are wrong, how could I fix this?
Please help.
I see that the color bar is the hue channel of HSV, so I suggest you convert to temperature along these lines: convert the image to HSV, take the first layer (hue), then rescale it to fit 31-39 degrees. The colors also seem to be flipped, so flip them upside down.
M = imread('jQLo5.jpg');
Mhsv = rgb2hsv(M);
maxTemp = 39;
minTemp = 31;
Mtemp = (1-Mhsv(:,:,1))*(maxTemp-minTemp)+minTemp;
figure;
imagesc(Mtemp)
colormap(flipud(hsv))
colorbar

How to extract color shade from a given sample image to convert another image using color of sample image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here are the images:
input source image:
input map for desired output image
output image
You can use a technique called "histogram matching" (another description).
Basically, you use the histogram of your source image as a goal and transform the values of each input map pixel to get the output histogram as close to the source as possible. You do it for each RGB channel of the image.
Here is my Python code for that:
from scipy.misc import imsave, imread  # removed in newer SciPy; imageio.imread/imwrite are equivalents
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, density=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, density=True)

    cdfsrc = imhist.cumsum() # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) # normalize

    cdftint = tinthist.cumsum() # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) # normalize

    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(im2, cdftint, bins[:-1]) # map the equalized values, not the raw pixels
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))

imsave("histnormresult.jpg", imres)
The output for your samples will look like this:
You could also try doing the same in HSV color space; it might give better results.
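As a rough sketch of that HSV variant (an editor's addition, not part of the answer; imsrc_f and imtint_f are assumed names for float RGB arrays in [0, 1]):

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def match_channel(src, ref, nbins=255):
    # same CDF-matching idea as the RGB loop above, on a single channel
    shist, sbins = np.histogram(src.ravel(), nbins, density=True)
    rhist, rbins = np.histogram(ref.ravel(), nbins, density=True)
    scdf = shist.cumsum(); scdf = scdf / scdf[-1]
    rcdf = rhist.cumsum(); rcdf = rcdf / rcdf[-1]
    mapped = np.interp(src.ravel(), sbins[:-1], scdf)
    return np.interp(mapped, rcdf, rbins[:-1]).reshape(src.shape)

# imsrc_f, imtint_f: float RGB arrays in [0, 1] (assumed)
src_hsv = rgb_to_hsv(imsrc_f)
tint_hsv = rgb_to_hsv(imtint_f)
out_hsv = np.stack([match_channel(src_hsv[..., k], tint_hsv[..., k])
                    for k in range(3)], axis=-1)
imres_hsv = hsv_to_rgb(np.clip(out_hsv, 0, 1))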
I think the hardest part is determining the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one with the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to HSV for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import Image, ImageFilter

im1 = Image.open("source.jpg").convert("RGB")  # assumed input file name

blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
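That last step has no code in the answer; a minimal sketch (an editor's addition) that forces each pixel to the dominant color's hue and saturation while keeping its own brightness might look like:

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def tint_with(img_rgb01, hue, sat):
    # img_rgb01: float RGB array in [0, 1]; hue/sat taken from max_hsv above
    hsv = rgb_to_hsv(img_rgb01)
    hsv[..., 0] = hue   # replace hue
    hsv[..., 1] = sat   # replace saturation; value (brightness) is kept
    return hsv_to_rgb(hsv)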
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)

    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        # for each input level, find the reference level with the closest CDF value
        lut = np.searchsorted(hist2.cumsum(), hist1.cumsum())
        lut = np.clip(lut, 0, 255).astype(np.uint8)  # cv2.LUT needs a 256-entry uint8 table
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans)
1. obtain the average color from the color map (ignore saturated white/black colors)
2. convert the light map to grayscale
3. change the dynamic range of the light map to match your desired output (I use the maximum dynamic range; you could compute the range of the color map and set it for the light map)
4. multiply the light map by the average color
This is how it looks:
And this is the C++ source code:
//picture pic0,pic1,pic2;
// pic0 - source color map
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;

// init output as source light map in grayscale, i = r+g+b
pic2 = pic1;
pic2.rgb2i();

// change light map dynamic range to maximum
i0 = pic2.p[0][0].dd; // min
i1 = pic2.p[0][0].dd; // max
for (y = 0; y < pic2.ys; y++)
    for (x = 0; x < pic2.xs; x++)
    {
        i = pic2.p[y][x].dd;
        if (i0 > i) i0 = i;
        if (i1 < i) i1 = i;
    }
for (y = 0; y < pic2.ys; y++)
    for (x = 0; x < pic2.xs; x++)
    {
        i = pic2.p[y][x].dd;
        i = (i - i0) * 767 / (i1 - i0);
        pic2.p[y][x].dd = i;
    }

// extract average color from color map (normalized to unit vector)
for (r = 0.0, g = 0.0, b = 0.0, y = 0; y < pic0.ys; y++)
    for (x = 0; x < pic0.xs; x++)
    {
        rr = BYTE(pic0.p[y][x].db[picture::_r]);
        gg = BYTE(pic0.p[y][x].db[picture::_g]);
        bb = BYTE(pic0.p[y][x].db[picture::_b]);
        i = rr + gg + bb;
        if (i < 400) // ignore saturated colors (whitish), 3*255 = white
        if (i > 16)  // ignore too dark colors (blackish), 0 = black
        {
            r += rr;
            g += gg;
            b += bb;
        }
    }
a = 1.0 / sqrt((r*r) + (g*g) + (b*b)); r *= a; g *= a; b *= a;

// recolor output
for (y = 0; y < pic2.ys; y++)
    for (x = 0; x < pic2.xs; x++)
    {
        a = DWORD(pic2.p[y][x].dd);
        rr = r * a; if (rr > 255) rr = 255; pic2.p[y][x].db[picture::_r] = BYTE(rr);
        gg = g * a; if (gg > 255) gg = 255; pic2.p[y][x].db[picture::_g] = BYTE(gg);
        bb = b * a; if (bb > 255) bb = 255; pic2.p[y][x].db[picture::_b] = BYTE(bb);
    }
I am using my own picture class, so here are the relevant members:
xs,ys - size of the image in pixels
p[y][x].dd - pixel at (x,y) position as a 32-bit integer type
p[y][x].db[4] - pixel access by color bands (r,g,b,a)
[notes]
If this does not meet your needs, then please specify more and add more images, because your current example is really not self-explanatory.
Regarding the previous answer, one thing to be careful with: once the CDF reaches its maximum, the interpolation gets misled and will match your values wrongly. To avoid this, you should give the interpolation function only the meaningful part of the CDF (up to where it first reaches its maximum) and the corresponding bins. Here is the adapted answer:
from scipy.misc import imsave, imread  # removed in newer SciPy; imageio.imread/imwrite are equivalents
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, density=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, density=True)

    cdfsrc = imhist.cumsum() # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8) # normalize

    cdftint = tinthist.cumsum() # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8) # normalize

    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    # cut the CDF off where it first saturates (255 after the scaling above)
    if (cdftint == 255).sum() > 0:
        idx_max = np.where(cdftint == 255)[0][0]
        im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2, cdftint, bins[:-1])
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))

imsave("histnormresult.jpg", imres)
Enjoy!

Keeping only the red/green/blue part of the image

I have made a very basic algorithm which extracts only the red/green/blue pixels of an image and displays them. However, it works well on some images and produces unexpected results on others. For example, when I want to keep only green, it also keeps turquoise.
Turquoise is a shade of green, but it is not what I want to display. I only want things that are 'visually' green.
Here is a sample output that shows what has gone wrong:
The algorithm picked up the turquoise color of the flower pot on which the dog sits. The original image is here.
My algorithm is below (the green one); all three are akin to each other.
void keepGreen() {
  for (int i = 0; i < img.pixels.length; i++) { // iterate over the pixels of the image
    float inputRed = red(img.pixels[i]);     // extract red
    float inputGreen = green(img.pixels[i]); // extract green
    float inputBlue = blue(img.pixels[i]);   // extract blue
    float outputRed = -1;
    float outputGreen = -1;
    float outputBlue = -1;
    if (inputRed <= inputGreen*0.9 && inputBlue <= inputGreen*0.9) { // check if the pixel is visually green
      outputRed = inputRed; // yes, let it stay
      outputGreen = inputGreen;
      outputBlue = inputBlue;
    } else { // no, make it gray
      int mostProminent = (int) max(inputRed, inputGreen, inputBlue);
      int leastProminent = (int) min(inputRed, inputGreen, inputBlue);
      int avg = (mostProminent + leastProminent) / 2;
      outputRed = avg;
      outputGreen = avg;
      outputBlue = avg;
    }
    img.pixels[i] = color(outputRed, outputGreen, outputBlue); // set the pixel to the new value
  }
  img.updatePixels(); // update the image
  image(img, WIDTH/2, HEIGHT/2, calculatedWidth, calculatedHeight); // display
}
How can I avoid those errors?
Experiment with tightening the red and blue thresholds individually, e.g. inputGreen * 0.8 instead of inputGreen * 0.9. Use a tool like Instant Eyedropper or Pixel Picker to check the RGB values of the colors you don't want, and use that as feedback to set the thresholds that eliminate them.
You might also want to consider the luminance level in your calculations. The pixels being picked up on the flower pot are darker than the other pixels on the flower pot.
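A sketch of what that could look like (an editor's addition, in Python for brevity; the 0.8 ratio and minimum luminance are guesses to tune):

def looks_green(r, g, b, ratio=0.8, min_luma=60):
    # Rec. 601 luma weights; reject dark pixels even if their channel ratios pass
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return r <= g * ratio and b <= g * ratio and luma >= min_luma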
Just because blue is less than green doesn't mean the pixel looks green. For example, turquoise might be red = 50, blue = 200, green = 150. Perhaps you need to (also) gray out pixels that have substantial blue in their own right, regardless of red/green.
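One way to express "visually green" more directly is by hue angle; a sketch (an editor's addition, thresholds are rough guesses to tune):

import colorsys

def visually_green(r, g, b):
    # hue ~0.33 is pure green; cyan/turquoise sits near 0.5 and is excluded
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return 0.20 < h < 0.45 and s > 0.25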

Retrieve color information from images

I need to determine the amount/quality of color in an image in order to compare it with other images and recommend to the user (the owner of the image) that maybe it should be printed in black and white rather than in color.
So far I'm analyzing the image and extracting some data from it:
The number of different colors I find in the image
The percentage of color in the whole page (color pixels / total pixels)
For further analysis I may need other characteristics of these images. Do you know what else is important (or what I'm missing here) in image analysis?
After some time I found a missing (and very important) characteristic which helped me a lot with the analysis of the images. I don't know if there is a name for it, but I called it the average color of the image:
While looping over all the pixels of the image and counting each color, I also retrieved the RGB values and summed all the reds, greens, and blues of all the pixels, just to come up with this average color, which, again, saved my life when I wanted to compare certain kinds of images.
The code is something like this:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

File f = new File("image.jpg");
BufferedImage im = ImageIO.read(f);
int tot = 0;
int red = 0;
int blue = 0;
int green = 0;
int w = im.getWidth();
int h = im.getHeight();
// Going over all the pixels
for (int i = 0; i < w; i++) {
    for (int j = 0; j < h; j++) {
        int pix = im.getRGB(i, j);
        if (!sameARGB(pix)) { // author's helper: compares the RGB values (skips gray pixels)
            Color c = new Color(pix); // getRGB returns a packed int, so unpack it first
            tot += 1;
            red += c.getRed();
            green += c.getGreen();
            blue += c.getBlue();
        }
    }
}
And you should get the results like this:
// Percentage of color on the image
double per = (double) tot / (h * w);
// Average color (Color's int constructor takes 0..255 per channel)
Color avgColor = new Color(red / tot, green / tot, blue / tot);
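To actually compare two images by their average color, one simple measure (an editor's sketch, in Python for brevity) is the Euclidean distance between the two averages:

def color_distance(c1, c2):
    # c1, c2: (r, g, b) tuples in 0..255; smaller means more similar average color
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5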
