How to find RGB/HSV color parameters for color tracking? - algorithm

I would like to track a color in a set of images.
For this reason I use the algorithm of constant thresholding mentioned in
Introduction to Autonomous Mobile Robots. This method simply marks all pixels that fall between a minimum and a maximum threshold for red, green, and blue (or hue, saturation, and value in my case).
My problem is that, although HSV is less sensitive to changing light conditions, I would still like to set the thresholds programmatically to minimize the number of false positives and false negatives. In other words, the algorithm should ensure that only a given set of pixels is marked in the end, for example a rectangle on a calibration image.
I know that the problem is a search in a 6-dimensional parameter space, and I could come up with possible solutions, but I am looking for other programmers' opinions and experience on this subject.
If it matters, I am trying to implement this in C++ with OpenCV.

As far as I understand the question, you are looking for a procedure to calibrate 6 thresholds (min and max for each of the HSV channels) from a calibration image that contains your tracking marker. To achieve this I would:
1. manually delineate the region in the calibration image where the marker appears
2. calculate that region's histograms, one for each of the HSV channels
3. set the min and max thresholds to the histogram percentiles 0.05 and 0.95, respectively
Using the 0.05 and 0.95 percentiles rather than the histogram's minimum and maximum values makes the measure more robust to noise.
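A rough sketch of this procedure with OpenCV's Python bindings (the same calls exist in C++); the file name and region coordinates are placeholders:

import cv2
import numpy as np

# Load the calibration image and convert it to HSV.
img = cv2.imread("calibration.png")  # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Manually delineated marker region (placeholder coordinates).
x, y, w, h = 100, 50, 80, 40
roi = hsv[y:y + h, x:x + w].reshape(-1, 3)

# Per-channel 0.05 and 0.95 percentiles serve as the min/max thresholds.
lo = np.percentile(roi, 5, axis=0).astype(np.uint8)
hi = np.percentile(roi, 95, axis=0).astype(np.uint8)

# Mark every pixel that falls between the thresholds on all three channels.
mask = cv2.inRange(hsv, lo, hi)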
EDIT:
A modification of the second step:
If you want to minimize the error, you could build a normalized histogram of the marker and a normalized histogram of the environment (these can come from 2 separate images) and subtract the latter from the former. The resulting marker histogram will have background pixel values attenuated, which will shift the values of the above-mentioned percentiles.
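A sketch of that modification for one channel, assuming separate marker and background pixel sets; normalizing first means the two images' sizes don't matter:

import numpy as np

def attenuated_histogram(marker_pixels, background_pixels):
    # Normalized histograms (bin width 1, so each sums to 1).
    hm, _ = np.histogram(marker_pixels, bins=256, range=(0, 256), density=True)
    hb, _ = np.histogram(background_pixels, bins=256, range=(0, 256), density=True)
    # Subtract the background and clamp negatives to zero.
    return np.clip(hm - hb, 0.0, None)

def percentile_from_hist(hist, q):
    # Read a percentile off the attenuated histogram's CDF.
    cdf = np.cumsum(hist) / np.sum(hist)
    return int(np.searchsorted(cdf, q))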

Related

How to quantitatively measure the diversity of a set of images

I'm trying to measure the diversity of a set of images. I'm defining diversity as a quantitative measure of the overall amount of difference in a set of images, so a set of identical images has a diversity of 0.
So far, the approach I thought of is to take the average intensity of every pixel across the set, which gives an "average" image for the set. Then use the "average" image to calculate the standard deviation of every pixel's intensity, creating a matrix of standard deviation values, one per pixel. I can then take the matrix norm of this standard deviation matrix; larger norms imply more diversity.
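For concreteness, a minimal NumPy sketch of that computation, assuming the set is stored as same-sized grayscale arrays:

import numpy as np

def diversity(images):
    # Stack the set; std(axis=0) is the per-pixel standard deviation.
    stack = np.stack([img.astype(np.float64) for img in images])
    per_pixel_std = stack.std(axis=0)
    # Frobenius (matrix) norm of the std matrix; 0 for identical images.
    return np.linalg.norm(per_pixel_std)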
Another post (linked below) suggests that, to measure how close an image is to a set of images, one can build a classifier and see at what tolerance value the new image is accepted. This measures how closely one image matches a set of images, but doesn't measure the diversity of the set (unless it's performed many times, and I'm not sure how that would work).
Is there a better way of measuring the diversity of a set of images than just by taking the matrix norm of the standard deviation of every pixel? Any info is appreciated. Thank you!
Posts referenced:
Measuring how a new sample contributes to the diversity of a dataset
Clustering of images to evaluate diversity (Weka?)

Calculate contrast of a color image (RGB)

In a black and white image, we can easily calculate the contrast as (total no. of white pixels - total no. of black pixels).
How can I calculate this for a color (RGB) image?
Any idea will be appreciated.
You may use the standard deviation of the grayscale image as a measure of contrast. This is called "RMS contrast". See https://en.wikipedia.org/wiki/Contrast_(vision)#RMS_contrast for details.
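For instance, a two-line NumPy sketch, assuming the image is already grayscale:

import numpy as np

def rms_contrast(gray):
    # RMS contrast: standard deviation of the normalized intensities.
    return (gray.astype(np.float64) / 255.0).std()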
Contrast is defined as the difference between the highest and lowest intensity value of the image. So you can easily calculate it from the respective histogram.
Example: If you have a plain white image, the lowest and highest value are both 255, thus the contrast is 255-255=0. If you have an image with only black (0) and white (255) you have a contrast of 255, the highest possible value.
The same method can be applied to color images if you calculate the luminance of each pixel (and thus convert the image to greyscale). There are several different methods to convert images to greyscale; you can choose whichever you like.
To make the approach more sophisticated, it is advisable to ignore a certain percentage of pixels to account for outliers (otherwise a single white and a single black pixel would lead to "full contrast", regardless of all other pixels). Another approach would be to take the number of dark and light pixels into account, as described by @Yves Daoust. This approach has the flaw that one has to set an arbitrary threshold to determine which pixels count as dark/light (usually 127).
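A sketch combining both points: a greyscale conversion plus percentile clipping to ignore outliers (the 1%/99% cut-offs are an arbitrary choice):

import numpy as np

def minmax_contrast(rgb, clip=1.0):
    # Rec. 601 luma weights; any greyscale conversion would do.
    lum = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Spread between the low and high luminance percentiles.
    lo, hi = np.percentile(lum, [clip, 100.0 - clip])
    return hi - lo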
This does not have a single answer. One idea I can think of is to operate on each of the three channels separately (red, green, and blue): compute the histogram of each channel and operate on it.
A simple Google search turns up many relevant algorithms; one that I have used is root mean square contrast (the standard deviation of the pixel intensities).

matlab find peak images

I have a binary image below:
It's an image of a random abstract picture, and using MATLAB, what I want to do is detect how many peaks it has, so I'll know that there are roughly 5 objects in it.
As you can see, there are 5 peaks in it, which means there are 5 objects in it.
I've tried using imregionalmax(), but I don't find it useful since my image is already binary. I also tried regionprops('Area'), but it reports the wrong number since there is no exact whitespace between the objects. Thanks in advance
An easy way to do this would be to simply sum across the rows for each column and find the peaks of the result using findpeaks. In the example below, I have opted to use the inverse of the image, which will result in positive peaks where the objects are.
rowSum = sum(1 - image, 1); % sum the inverted image down each column
If we plot this, we get a curve with a peak over each object.
We can then use findpeaks to identify the peaks in this plot. We will apply a 5-point moving average to it to help eliminate false peaks.
[peaks, locations, widths, prominences] = findpeaks(smooth(rowSum));
You can then select the "true" peaks by thresholding based on any of these outputs. For this example we can use prominences and find the more prominent peaks.
isPeak = prominences > 50;
nPeaks = sum(isPeak)
5
Then we can plot the peak locations to confirm:
plot(locations(isPeak), peaks(isPeak), 'r*');
If you have some prior knowledge about the expected widths of the peaks, you could adjust the smooth span to match this expected width and obtain some cleaner peaks when using findpeaks.
Using an expected width of 40 for your image, findpeaks was able to perfectly detect all 5 peaks with no false positives.
findpeaks(smooth(rowSum, 40));
As they are peaks, they are vertical structures. So in this particular case, you can use projection histograms (also known as the histogram projection function): you make all the black pixels fall as if they were affected by gravity. You will then get a curve of black pixels along the bottom of your image, and you can count its peaks.
Here is the algorithm:
1. Invert the image (black is normally the absence of information).
2. Histogram projection.
3. Closing and opening in order to clean the signal and get the final result.
You can add a maxima detection step to get the tops of the peaks.
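A Python sketch of this pipeline (the MATLAB answer above is built on the same projection idea); the structuring size and the prominence threshold are placeholders to tune:

import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import grey_closing, grey_opening

def count_vertical_objects(binary, size=15, min_prominence=50):
    # Invert so black pixels count as 1, then project onto the columns.
    projection = (1 - binary).sum(axis=0)
    # Closing then opening to clean the 1-D signal.
    cleaned = grey_opening(grey_closing(projection, size), size)
    # Maxima detection on the cleaned projection.
    peaks, _ = find_peaks(cleaned, prominence=min_prominence)
    return len(peaks)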

Dominant "color" of an image

I have the following image:
What I want to do is "id" the individual strips based on their dominant color. What is the best approach to do this?
What I've done is take the image's value channel (HSV) and build a distribution of the values' occurrences. The problem is that strip0 gives values [27=32191, 28=5433, others=8] and strip1 gives [26=7107, 27=23111, others=22], so I can't get a definitive distinction.
The project's main goal is to compare an actual yellow-colored paper to the strips and determine which strip is the most similar.
First, since you know the boundaries of each strip in the reference image, the only possible problem here is that your reference image is noisy. A somewhat overkill way to handle that is to cluster the colors in each strip and take the cluster centroid as the representative color of the strip. To get a more meaningful result, consider the CIELAB colorspace for this step. Doing this, and converting the results back to RGB, for the first strip I get the RGB triplet (0.949375, 0.879872, 0.147898), and for the second strip (0.945324, 0.857322, 0.129756) (each channel in the range [0, 1]).
When you get a new image, you perform the same operation. But there are a lot of problems here. For instance, how are you handling the white balance in this input image? Supposing you have no such problem, it is then only a matter of finding the nearest color to the one you just extracted by the same process. To find the nearest color you also have to use a colorspace that is meaningful for such comparisons, and CIELAB is recommended again since the well-established Delta-E functions are defined on it. See http://en.wikipedia.org/wiki/Color_difference for some such metrics, the simplest being the Euclidean distance in CIELAB.
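A sketch of the matching step, assuming the representative colors have already been extracted; it uses OpenCV's RGB-to-Lab conversion and the plain Euclidean distance in Lab (CIE76, the simplest Delta-E):

import cv2
import numpy as np

def to_lab(rgb):
    # Convert one float RGB triplet in [0, 1] to CIELAB.
    px = np.array([[rgb]], dtype=np.float32)
    return cv2.cvtColor(px, cv2.COLOR_RGB2Lab)[0, 0]

def nearest_strip(sample_rgb, reference_rgbs):
    # Index of the reference color with the smallest Delta-E to the sample.
    sample = to_lab(sample_rgb)
    deltas = [np.linalg.norm(sample - to_lab(r)) for r in reference_rgbs]
    return int(np.argmin(deltas))

Calling nearest_strip with a measured color and the list of strip centroids then gives the index of the most similar strip.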
Calibrate your equipment. If you do not calibrate your equipment, you will have arbitrary errors between the test sample and the reference. Lighting is part of your equipment.
Use edge detection and your knowledge of the reference strip's geometry (strips are equal width) to determine sampling regions. For each sampling region, extract an internal patch.
For the test strip, compute an image where each pixel is the max difference within a sampling window (e.g. 5x5). This will let you identify a relatively homogeneous region which is dissimilar to the outside region (i.e. the paper). Extract a patch.
Use downsampling to find an integrated color for each patch per svnpenn's advice. You can look at other computation methods later, but this should work quite well.
For weights wh, ws, wv, compute similarity = wh*abs(h0-h1) + ws*abs(s0-s1) + wv*abs(v0-v1) between the test color and each reference color. You can look at other distance measures later, but this should work quite well. Start with equal weights. One perk to this method is that it behaves well regardless of the dimension or combination of dimensions under which the reference strip varies.
Sort the results to find the most similar and second most similar matches. Note that similarity is set up so zero is an exact match, and a big number is a poor match. Use the ratio of these two results to estimate the quality of the most similar match - if the first two matches are very close, it's probably not a great match to either.
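A sketch of that similarity measure with the suggested equal starting weights (note that, like the formula above, it does not wrap the hue channel around):

def hsv_similarity(c0, c1, wh=1.0, ws=1.0, wv=1.0):
    # Weighted L1 distance between two HSV triplets; 0 is an exact match.
    h0, s0, v0 = c0
    h1, s1, v1 = c1
    return wh * abs(h0 - h1) + ws * abs(s0 - s1) + wv * abs(v0 - v1)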
You can scan through all the colors and use a hashtable to keep track of how many pixels of each color there are.
Take those numbers and, remembering which colors they correspond to, sort them in decreasing order.
Look at the sorted list of numbers and find the difference between each consecutive pair of numbers. Keep track of the indices in the list of the two numbers that produced each difference. Sort this difference list.
Look at the maximum number in the difference list. You now have the biggest drop-off between two sets of pixels. Go find which was the bigger one. Everything with this number of pixels and above is a dominant color. Everything below is a sub-dominant color. Now you know how many dominant colors you have, and what they are.
Should be pretty easy from there to do whatever it is you want to do.
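A compact sketch of this counting-and-drop-off scheme, assuming the pixels come in as RGB tuples (e.g. img.reshape(-1, 3) from a NumPy array):

from collections import Counter

def dominant_colors(pixels):
    # Hashtable of color -> pixel count, then sort in decreasing order.
    ordered = Counter(map(tuple, pixels)).most_common()
    if len(ordered) < 2:
        return [color for color, _ in ordered]
    # Largest drop-off between consecutive counts marks the split.
    drops = [ordered[i][1] - ordered[i + 1][1] for i in range(len(ordered) - 1)]
    split = drops.index(max(drops))
    # Everything at or above the split is a dominant color.
    return [color for color, _ in ordered[:split + 1]]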
The only time this wouldn't work is if some of the noise was of the same color as a strip, so much so that it corrupted your data.
In this case, you would use a different approach, which you can also use in the first case - looking at runs. Go through the pixels, and each time you find a new color, look at how many of the following pixels are of the same color.
Use the method described earlier to cluster the colors into dominant and non-dominant, for the same result.
In both cases, if you know that the picture is of vertical strips, you could limit the number of horizontal lines of colors you look at to make things go faster.
You could split the image into sections, then resize each section to one pixel. This is an example using the whole image:
$ convert Y82IirS.jpg -resize 1x1 txt:
# ImageMagick pixel enumeration: 1,1,255,srgb
0,0: (220,176, 44) #DCB02C srgb(220,176,44)
Average colour of an image
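The same trick is easy to reproduce with Pillow in Python; the crop box below is a placeholder for one section:

from PIL import Image

img = Image.open("Y82IirS.jpg").convert("RGB")
# Crop one strip, then resize it to 1x1 with a box filter (a plain average).
strip = img.crop((0, 0, img.width // 5, img.height))
r, g, b = strip.resize((1, 1), Image.BOX).getpixel((0, 0))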

How can I choose an image with higher contrast in PHP?

For a thumbnail-engine I would like to develop an algorithm that takes x random thumbnails (crop, no resize) from an image, analyzes them for contrast and chooses the one with the highest contrast. I'm working with PHP and Imagick but I would be glad for some general tips about how to compute contrast of imagery.
It seems that many things are easier than computing contrast, for example counting colors, computing luminosity, etc.
What are your experiences with the analysis of picture material?
I'd do it that way (pseudocode):
L[256] = {0, 0, 0, ...}
for each pixel:
    luminance = avg(R, G, B)
    increment L[luminance] by 1
for i = 0 to 255:
    if L[i] < C: L[i] = 0   // C = threshold of your choice
find the indices of the first and last non-zero values of L[]
contrast = last - first
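The same idea as a runnable NumPy sketch; the threshold C is an arbitrary choice, as above:

import numpy as np

def contrast(rgb, c=10):
    # luminance = avg(R, G, B), binned into a 256-entry histogram.
    lum = rgb.reshape(-1, 3).mean(axis=1).astype(int)
    hist = np.bincount(lum, minlength=256)
    # Ignore sparsely populated bins, then take the occupied spread.
    occupied = np.nonzero(hist >= c)[0]
    return int(occupied[-1] - occupied[0]) if occupied.size else 0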
In looking for the image "with the highest contrast," you will need to be very careful in how you define contrast for the image. In the simplest way, contrast is the difference between the lowest intensity and the highest intensity in the image. That is not going to be very useful in your case.
I suggest you use a histogram approach to describe the contrast of a given image and then compare the properties of the histograms to determine the image with the highest contrast as you define it. You could use a variety of well known containers to represent the histogram in code, or construct a class to meet your specific needs. (I am not implying that you need to create a histogram in the form of a chart – just a statistical representation of the intensity values.) You could use the variance of each histogram directly as a measure of contrast, or use the standard deviation if that is easier to work with.
The key really lies in how you define the contrast of the image. In general, I would define a high-contrast image as one in which all, or nearly all, of the possible intensity values are present. And I would further add that in this definition of a high-contrast image, the intensity values will tend to be distributed across the range of possible values in a uniform way.
Using this approach, a low contrast image would tend to have relatively few discrete intensity values and they would tend to be closely grouped together rather than uniformly distributed. (As a general rule, they will also tend to be grouped toward the center of the range.)
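A sketch of using this histogram statistic to pick the crop the question asks for, with the standard deviation as the contrast measure:

import numpy as np

def pick_highest_contrast(thumbnails):
    # thumbnails: list of grayscale arrays; largest intensity std wins.
    return max(thumbnails, key=lambda t: t.astype(np.float64).std())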
