Retrieve color information from images

I need to determine the amount/quality of color in an image in order to compare it with other images and to recommend to the user (the owner of the image) that perhaps it should be printed in black and white rather than in color.
So far I'm analyzing the image and extracting some data from it:
The number of distinct colors in the image
The percentage of color in the whole page (color pixels / total pixels)
For further analysis I may need other characteristics of these images. Do you know what else is important (or what I'm missing) in image analysis?

After some time I found a missing characteristic (a very important one) which helped me a lot with the analysis of the images. I don't know if there is an official name for it, but I call it the average color of the image:
While looping over all the pixels of the image and counting each color, I also read the RGB values and summed all the reds, greens and blues across the color pixels. Dividing each sum by the number of color pixels gives the average color which, again, saved my life when I wanted to compare certain kinds of images.
The code is something like this:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

File f = new File("image.jpg");
BufferedImage im = ImageIO.read(f);
int tot = 0;
int red = 0;
int green = 0;
int blue = 0;
int w = im.getWidth();
int h = im.getHeight();
// Go over all the pixels
for (int i = 0; i < w; i++) {
    for (int j = 0; j < h; j++) {
        int pix = im.getRGB(i, j); // packed ARGB value
        int r = (pix >> 16) & 0xFF; // getRGB() returns an int, not a Color,
        int g = (pix >> 8) & 0xFF;  // so the channels are extracted by
        int b = pix & 0xFF;         // bit-shifting rather than getRed() etc.
        if (!(r == g && g == b)) { // a pixel is "color" unless R == G == B (gray)
            tot += 1;
            red += r;
            green += g;
            blue += b;
        }
    }
}
Then you get the results like this:
// Percentage of color pixels in the image
double per = (double) tot / (h * w);
// Average color <-------------
Color c = new Color(red / tot, green / tot, blue / tot); // integer division gives each channel's 0-255 average
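Since the stated goal is comparing images by this average color, here is a minimal sketch of the comparison step. The helper name and the use of Euclidean distance are my own illustration, not part of the original post:

import java.awt.Color;

// Hypothetical helper: Euclidean distance between two average colors.
// A small distance suggests the images have a similar overall tint.
static double colorDistance(Color a, Color b) {
    int dr = a.getRed() - b.getRed();
    int dg = a.getGreen() - b.getGreen();
    int db = a.getBlue() - b.getBlue();
    return Math.sqrt(dr * dr + dg * dg + db * db);
}

What counts as a "small" distance has to be tuned for the use case.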

Related

Calculate the temperature in a thermal image (MATLAB)

What I am trying to do is calculate the temperature of a selected area in an image.
My code:
M = imread('IR003609.BMP');
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
imshow(M, [a b]);
h = roipoly();
maskOfROI = h;
selectedValues = M(maskOfROI);
averageTemperature = mean(selectedValues)
maxTemperature = max(selectedValues)
minTemperature = min(selectedValues)
My image is this, with the mouth area selected:
Then the values it gives me are these:
averageTemperature =
64.0393
maxTemperature =
uint8
255
minTemperature =
uint8
1
Now my questions: is the program giving the correct temperature values (judging by the values visible in the image), or does emissivity come into it? If the values are wrong, how could I fix this? Please help.
The color bar looks like the hue channel of HSV, so I suggest converting to temperature along these lines: convert to HSV, take the first (hue) layer, then rescale it to the 31-39 degree range. The colors also appear to be flipped, so flip them upside down.
M = imread('jQLo5.jpg');
Mhsv = rgb2hsv(M);
maxTemp = 39;
minTemp = 31;
% Hue runs opposite to temperature here, so invert it before rescaling
Mtemp = (1 - Mhsv(:,:,1)) * (maxTemp - minTemp) + minTemp;
figure;
imagesc(Mtemp)
colormap(flipud(hsv))
colorbar

White balance/color temperature algorithm

I have an RGB picture and I want to adjust its color temperature, like in Lightroom. I tried converting RGB to LAB and adjusting the b channel, but the result is wrong. Can someone help with sample code or an algorithm for implementing a color temperature adjustment?
Like this:
rgb temperature(int temp, rgb color);
where temp is the temperature adjustment, color is the input color (from the image), and the return value is the adjusted output color.
Right now I use the algorithm below. I took RGBtoLAB and LABtoRGB from: http://www.easyrgb.com/index.php?X=MATH&H=01#text1
for (int i = 0; i < img.pixCount; i++)
{
    lab32f lab = RGBtoLAB(img.getPixel(i));
    lab.b += temp.value; // temperature shift in the range -100..100
    img.setPixel(i, LABtoRGB(lab));
}
Result images: (not reproduced here)
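As a hedged illustration of the general idea (not an answer from the thread), a much cruder adjustment can be done directly in RGB: warm the image by raising red and lowering blue, and the reverse to cool it. The helper below and its -100..100 range are assumptions for illustration; this is neither the LAB-based method from the question nor Lightroom's actual algorithm:

import java.awt.image.BufferedImage;

// Crude color-temperature shift: temp > 0 warms (more red, less blue),
// temp < 0 cools. temp is assumed to be in roughly -100..100.
static BufferedImage shiftTemperature(BufferedImage img, int temp) {
    BufferedImage out = new BufferedImage(img.getWidth(), img.getHeight(),
                                          BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int pix = img.getRGB(x, y);
            int r = clamp(((pix >> 16) & 0xFF) + temp);
            int g = (pix >> 8) & 0xFF;          // green left unchanged
            int b = clamp((pix & 0xFF) - temp);
            out.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    return out;
}

static int clamp(int v) { return Math.max(0, Math.min(255, v)); }

Working in LAB, as the question attempts, is the more principled route because the b channel models the blue-yellow axis directly; this RGB version only approximates the visual effect.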

Not loading white pixels of a picture in Processing 2.0

This is for the programming language Processing (2.0).
Say I wish to load a non-square image (let's use a green circle for the example). If I load this on a black background, you can visibly see the white square of the image (i.e., all the parts of the image that aren't the green circle). How would I go about efficiently removing them?
I cannot think of an efficient way to do it, and I will be doing it to hundreds of pictures about 25 times a second (since they will be moving).
Any help would be greatly appreciated; the more efficient the code, the better.
As #user3342987 said, you can loop through the image's pixels to see whether each pixel is white or not. However, it's worth noting that 255 is white (not 0, which is black). You also shouldn't hardcode the replacement color, as they suggested -- what if the image is moving over a striped background? The best approach is to change all the white pixels into transparent pixels using the image's alpha channel. Also, since you mentioned you would be doing it "about 25 times a second", you shouldn't repeat these checks on every frame: the result will be the same every time, so that would be wasteful. Instead, do it once when the images are first loaded, something like this (untested):
PImage[] images;

void setup() {
  size(400, 400);
  images = new PImage[10];
  for (int i = 0; i < images.length; i++) {
    // example filenames
    PImage img = loadImage("img" + i + ".jpg");
    img.format = ARGB; // JPGs load without alpha; setting the public format field is a common workaround
    img.loadPixels();
    for (int p = 0; p < img.pixels.length; p++) {
      // color(255, 255, 255) is white
      if (img.pixels[p] == color(255, 255, 255)) {
        img.pixels[p] = color(0, 0); // set it to transparent (the first number is meaningless)
      }
    }
    img.updatePixels();
    images[i] = img;
  }
}

void draw() {
  // draw the images as normal; the white pixels are now transparent
}
void draw(){
//draw the images as normal, the white pixels are now transparent
}
So, this will lead to no lag during draw() because you edited out the white pixels in setup(). Whatever you're drawing the images on top of will show through.
It's also worth mentioning that some image filetypes have an alpha channel built in (e.g., the PNG format), so you could also change the white pixels to transparent in some image editor and use those edited files for your sketch. Then your sketch wouldn't have to edit them every time it starts up.
Pixels are stored in the pixels[] array. You can use a for loop to check whether each value is 0 (aka white); if it is white, replace it with the black background.

Keeping only the red/green/blue part of the image

I have made a very basic algorithm which extracts only the red/green/blue pixels of an image and displays them. It works well on some images but produces unexpected results on others. For example, when I want to keep only green, it also keeps turquoise.
Turquoise is a shade of green, but it is not what I want to display. I only want things that are 'visually' green.
Here is a sample output that shows what has gone wrong:
The algorithm picked up the turquoise color of the flower pot on which the dog sits. The original image is here.
My algorithm is below (the green version; all three algorithms are alike).
void keepGreen() {
  // iterate over the pixels of the image
  for (int i = 0; i < img.pixels.length; i++) {
    float inputRed = red(img.pixels[i]);     // extract red
    float inputGreen = green(img.pixels[i]); // extract green
    float inputBlue = blue(img.pixels[i]);   // extract blue
    float outputRed = -1;
    float outputGreen = -1;
    float outputBlue = -1;
    if (inputRed <= inputGreen * 0.9 && inputBlue <= inputGreen * 0.9) {
      // the pixel is visually green: let it stay
      outputRed = inputRed;
      outputGreen = inputGreen;
      outputBlue = inputBlue;
    } else {
      // not green: make it gray (midpoint of the extreme channels)
      int mostProminent = (int) max(inputRed, inputGreen, inputBlue);
      int leastProminent = (int) min(inputRed, inputGreen, inputBlue);
      int avg = (mostProminent + leastProminent) / 2;
      outputRed = avg;
      outputGreen = avg;
      outputBlue = avg;
    }
    img.pixels[i] = color(outputRed, outputGreen, outputBlue); // set the pixel to the new value
  }
  img.updatePixels(); // update the image
  image(img, WIDTH/2, HEIGHT/2, calculatedWidth, calculatedHeight); // display
}
How can I avoid those errors?
Experiment with tightening the red and blue thresholds individually, i.e. inputGreen * 0.8 instead of inputGreen * 0.9. Use a tool like Instant Eyedropper or Pixel Picker to check the RGB values of the colors you don't want, and use that as feedback to set thresholds that eliminate them.
You might also want to consider the luminance level in your calculations. The pixels being picked up on the flower pot are darker than the other pixels on the flower pot.
Just because blue is less than green doesn't mean the pixel looks green. For example, a turquoise pixel might be red=50, green=200, blue=170: blue passes the "less than 0.9 × green" test, yet the pixel still reads as turquoise. Perhaps you need to (also) gray out pixels that have substantial blue in their own right, regardless of red/green.
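A hedged alternative to channel-ratio thresholds (my own sketch, not from the answers above) is to classify "visually green" by hue: convert each pixel to HSB and accept only hues in a green band. The 0.22-0.45 band and the saturation floor below are illustrative guesses that need tuning per image:

import java.awt.Color;

// Hue-based test for "visually green" pixels. The hue band and the
// minimum saturation are illustrative values, to be tuned per image.
static boolean looksGreen(int r, int g, int b) {
    float[] hsb = Color.RGBtoHSB(r, g, b, null);
    boolean greenHue = hsb[0] > 0.22f && hsb[0] < 0.45f; // roughly 80-160 degrees
    boolean vivid = hsb[1] > 0.25f;                      // ignore near-gray pixels
    return greenHue && vivid;
}

Turquoise sits around hue 0.5, so a band that stops at 0.45 excludes it even when its blue channel is well below its green channel.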

Algorithm to detect overlapping rows of two images

Let's say I have 2 images A and B as below.
Notice that the bottom of A overlaps with the top of B for n rows of pixels, denoted by the two red rectangles. A and B have the same number of columns but may have different numbers of rows.
Two questions:
Given A and B, how to determine n efficiently?
If B is somehow changed so that 30%-50% of its pixels are completely replaced (for example, imagine the top-left area showing the numbers of votes/answers/views replaced by an ad banner), how can n be determined then?
If anyone can point to an algorithm, or better yet an implementation in any language (preferably C/C++, C#, Java or JavaScript), it would be much appreciated.
If I understood correctly, you probably want to look at the normalized cross-correlation of greyscale versions of the two images. With large images or large overlapping regions, this is done most efficiently in the frequency domain using the FFTs of the images (or of the overlap areas), and is then called phase correlation.
The basic steps I would take in your situation are as follows:
1. Extract the bottom half of the first image and the top half of the second image.
2. Convert both image patches to greyscale.
3. Perform an FFT on each image patch (there are some details here relating to windowing and padding).
4. Multiply one FFT by the complex conjugate of the other (equivalent to correlation in the spatial domain).
5. Do an inverse FFT on the result.
6. Find the peak in the result to get the XY shift that best aligns the two images.
Having found the relative offset between the top and bottom image patches, you can easily calculate n as you required.
If you want to experiment without having to code the above from scratch, OpenCV has a number of functions for template matching, which you can easily try. See here for details.
If part of either image has been changed - e.g. by a banner ad - the above procedure still gives the best match, and the magnitude of the peak you find in step 6 gives an indication of the match "confidence" - so you can get a rough idea of how similar the two areas are.
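Since the answer above points at OpenCV's template matching, here is a hedged Java sketch of that route; the file names and the 64-row strip height are arbitrary assumptions. It takes a strip from the top of B as the template, finds the row of A where it matches best, and derives n from that:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class OverlapFinder {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Hypothetical file names; A's bottom is assumed to overlap B's top.
        Mat a = Imgcodecs.imread("a.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat b = Imgcodecs.imread("b.png", Imgcodecs.IMREAD_GRAYSCALE);
        // Use a 64-row strip from the top of B as the template (64 is arbitrary).
        Mat template = b.submat(0, 64, 0, b.cols());
        Mat result = new Mat();
        Imgproc.matchTemplate(a, template, result, Imgproc.TM_CCOEFF_NORMED);
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        int matchRow = (int) mm.maxLoc.y; // row in A where B's top strip matches best
        int n = a.rows() - matchRow;      // number of overlapping rows
        System.out.println("n = " + n + ", confidence = " + mm.maxVal);
    }
}

For TM_CCOEFF_NORMED the peak value lies in -1..1 and serves as the match "confidence" mentioned above, so a banner ad replacing part of the strip lowers the score without necessarily moving the peak.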
I had a little play at doing this with ImageMagick. Here is the animation of what I did, and the explanation and code follow.
First I grabbed a couple of StackOverflow pages, using webkit2png, calling them a.png and b.png.
Then I cropped a rectangle out of the top-left of b.png, and a column of the same width but the full height out of a.png. That gave me the two crops shown in the animation (not reproduced here).
I now overlay the smaller rectangle from the second page onto the bottom of the strip from the first page, and calculate the difference between the two images by subtracting one from the other. When the difference is zero, the pictures must be the same and the output image will be black, so that is the point at which they overlap.
Here is the code:
#!/bin/bash
# Grab page 2 as "A" and page 3 as "B"
# webkit2png -F -o A http://stackoverflow.com/questions?page=2&sort=newest
# webkit2png -F -o B http://stackoverflow.com/questions?page=3&sort=newest
BLOBH=256 # blob height
BLOBW=256 # blob width
# Crop a column 256 pixels wide out of a.png that doesn't contain adverts or junk, into x.png
convert a.png -crop ${BLOBW}x+0+0 x.png
# Crop a 256x256 rectangle out of the top-left corner of b.png, into y.png
convert b.png -crop ${BLOBW}x${BLOBH}+0+0 y.png
# Get the height of x.png (x.png must exist before we can measure it)
XHEIGHT=$(identify -format "%h" x.png)
# Now slide y.png up across x.png, starting at the bottom of x.png
# ... differencing the two images as we go
# ... stop when the difference is nothing, i.e. they are the same and difference is black image
lines=0
while :; do
    OFFSET=$((XHEIGHT-BLOBH-1-lines))
    if [ $OFFSET -lt 0 ]; then break; fi
    FN=$(printf "out-%04d.png" $lines)
    diff=$(convert x.png -crop ${BLOBW}x${BLOBH}+0+${OFFSET} +repage \
        y.png \
        -fuzz 5% -compose difference -composite +write $FN \
        \( +clone -evaluate set 0 \) -metric AE -compare -format "%[distortion]" info:)
    echo $diff:$lines
    if [ "$diff" -eq 0 ]; then break; fi # difference is black, so the overlap is found
    ((lines++))
done
n=$((BLOBH+lines))
The FFT solution might be more complex than you were hoping for, but for the general problem it is probably the only robust way. For a simple solution, you need to start making assumptions.
For example, can you guarantee that the columns of the images line up (barring the noted changes)? If so, you can go down the path suggested by #n.m.
Can you cut the image into vertical strips, and consider a row to match if a sufficient proportion of the strips match? (This could be redone with a few passes using different column offsets if it needs to be robust to misalignment.)
This gives something like:
#include <algorithm>
#include <map>
#include <unordered_map>

class Image
{
public:
    virtual ~Image() {}
    typedef int Pixel;
    virtual Pixel* getRow(int rowId) const = 0;
    virtual int getWidth() const = 0;
    virtual int getHeight() const = 0;
};

class Analyser
{
public:
    Analyser(const Image& a, const Image& b)
      : a_(a), b_(b) {}

    static const int numStrips = 16;

    struct StripId
    {
        StripId(int r = 0, int c = 0)
          : row_(r), strip_(c)
        {}
        int row_;
        int strip_;
    };

    typedef std::unordered_map<unsigned, StripId> StripTable;

    int numberOfOverlappingRows()
    {
        int commonWidth = std::min(a_.getWidth(), b_.getWidth());
        int stripWidth = commonWidth / numStrips;
        StripTable aHash;
        createStripTable(aHash, a_, stripWidth);
        StripTable bHash;
        createStripTable(bHash, b_, stripWidth);

        // This is the position at which the bottom row of A appears in B.
        int bottomOfA = 0;
        bool canFindBottomOfAInB = canFindLine(a_.getRow(a_.getHeight() - 1), bHash, stripWidth, bottomOfA);
        int topOfB = 0;
        bool canFindTopOfBInA = canFindLine(b_.getRow(0), aHash, stripWidth, topOfB);
        int topOfBFromBottomOfA = a_.getHeight() - topOfB;
        // Expect topOfBFromBottomOfA == bottomOfA
        return bottomOfA;
    }

    bool canFindLine(Image::Pixel* source, StripTable& target, int stripWidth, int& matchingRow)
    {
        Image::Pixel* strip = source;
        std::map<int, int> matchedRows;
        for (int index = 0; index < numStrips; ++index) // one hash per strip
        {
            Image::Pixel hashValue = getHashOfStrip(strip, stripWidth);
            bool match = target.count(hashValue) > 0;
            if (match)
            {
                ++matchedRows[target[hashValue].row_];
            }
            strip += stripWidth;
        }
        // Could set a threshold requiring more matches than 0.
        if (matchedRows.empty())
            return false;
        // FIXME return the most matched row.
        matchingRow = matchedRows.begin()->first;
        return true;
    }

    Image::Pixel* getStrip(const Image& im, int row, int stripId, int stripWidth)
    {
        return im.getRow(row) + stripId * stripWidth;
    }

    static Image::Pixel getHashOfStrip(Image::Pixel* strip, unsigned width)
    {
        Image::Pixel hashValue = 0;
        for (unsigned col = 0; col < width; ++col)
        {
            hashValue |= *(strip + col); // crude hash; OR-ing discards a lot of information
        }
        return hashValue;
    }

    void createStripTable(StripTable& hash, const Image& image, int stripWidth)
    {
        for (int row = 0; row < image.getHeight(); ++row)
        {
            for (int index = 0; index < numStrips; ++index)
            {
                // Warning: not this simple!
                // If images come from a lossy intermediate, pixels will not be _exactly_
                // equal, so some kind of fuzzy equality is needed here.
                // The details depend on the image format etc., but this is the gist.
                Image::Pixel* strip = getStrip(image, row, index, stripWidth);
                Image::Pixel hashValue = getHashOfStrip(strip, stripWidth);
                hash[hashValue] = StripId(row, index);
            }
        }
    }

    const Image& a_;
    const Image& b_;
};
If rows match exactly, then sort the rows of both images and merge; your duplicates are right there. Then go back to the original images and find the longest contiguous streak of duplicates in A such that the corresponding rows in B are also contiguous. Or just look near the top and the bottom of the corresponding images.
If there are banner ads, the first thing that comes to mind is breaking the images into several vertical strips and doing the same with each pair of strips separately.
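As a hedged Java sketch of this exact-match idea (the row hashing via Arrays.hashCode and the suffix/prefix scan are my own choices, not the answerer's code): hash every row, then look for the largest n such that A's last n row hashes equal B's first n.

import java.awt.image.BufferedImage;
import java.util.Arrays;

// Hash every row of an image so rows can be compared cheaply.
static int[] rowHashes(BufferedImage img) {
    int[] hashes = new int[img.getHeight()];
    int[] row = new int[img.getWidth()];
    for (int y = 0; y < img.getHeight(); y++) {
        img.getRGB(0, y, img.getWidth(), 1, row, 0, img.getWidth());
        hashes[y] = Arrays.hashCode(row);
    }
    return hashes;
}

// Largest n such that the last n rows of A match the first n rows of B.
static int overlappingRows(BufferedImage a, BufferedImage b) {
    int[] ha = rowHashes(a), hb = rowHashes(b);
    for (int n = Math.min(ha.length, hb.length); n > 0; n--) {
        boolean match = true;
        for (int i = 0; i < n && match; i++) {
            match = ha[ha.length - n + i] == hb[i];
        }
        if (match) return n;
    }
    return 0;
}

A robust version would verify candidate matches against the actual pixels, since two different rows can share a hash.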
Something like this will probably help:
First, traverse image A from the bottom upwards and search for a row with significant information in it. "Information" can be measured, for example, as the total color shift across the row: if two adjacent pixels have colors #ffffff and #ff0000, add 2.0 to the total. Have a series of thresholds ready and lock onto the first row that reaches each threshold; the series can be "10.0, 0.1*row length, 0.15*row length, ..." up to a reasonable limit.
Then traverse this array from the topmost discovered row downwards: take the corresponding row and search for its match in B from the top down. If a match is found and the threshold is big enough, take the next row in the array, compute where its match should be, and compare. If that succeeds, you have locked in the correct offset of B over A, and it equals height_of_A - first_row_index + first_row_match_index. If it fails, continue searching with the next row.
If all matches fail, search for the very last row of A starting from the very first row of B, up to the offset of the first row found from the bottom of A. If that also fails, the answer is 0. Of course, if you are using JPEG images, use threshold matching, as pixels might not be exactly equal in A and B, perhaps with some tolerance for unmatched pixels as well.
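As a hedged sketch, the "information" measure described above might look like this in Java (the per-channel scaling to 0..1 is my assumption, chosen so that the #ffffff to #ff0000 example contributes 2.0):

import java.awt.image.BufferedImage;

// "Information" in a row: total color shift between adjacent pixels,
// with each channel difference scaled to 0..1 so a white-to-red step
// (#ffffff -> #ff0000) contributes 2.0, as in the description above.
static double rowInformation(BufferedImage img, int y) {
    double total = 0;
    int prev = img.getRGB(0, y);
    for (int x = 1; x < img.getWidth(); x++) {
        int cur = img.getRGB(x, y);
        total += Math.abs(((prev >> 16) & 0xFF) - ((cur >> 16) & 0xFF)) / 255.0
               + Math.abs(((prev >> 8) & 0xFF) - ((cur >> 8) & 0xFF)) / 255.0
               + Math.abs((prev & 0xFF) - (cur & 0xFF)) / 255.0;
        prev = cur;
    }
    return total;
}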