I need to segment an image into 7 colors (red, orange, yellow, green, light blue, blue, violet), as in the rainbow. Do you know how to do it? Any papers or algorithms are welcome. For example, it could be done by assigning a color to each (r, g, b) triple, but that is not efficient, since there are 256^3 combinations.
The "H" component of the HSV colourspace http://en.wikipedia.org/wiki/HSL_and_HSV, will give you a reasonable number representing the position on a (continuous) rainbow.
Then it is easy enough to divide that continuous space into seven segments of your choice.
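For illustration, here is a minimal sketch in Java using java.awt.Color.RGBtoHSB. The hue boundaries between the seven bands are rough guesses and will need tuning for your images, and very dark or low-saturation pixels (black, white, gray) may need separate handling:

import java.awt.Color;

public class HueSegmenter {
    // 0=red, 1=orange, 2=yellow, 3=green, 4=light blue, 5=blue, 6=violet
    static int rainbowBand(int r, int g, int b) {
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        float hueDeg = hsb[0] * 360f;               // hue in degrees, 0..360
        if (hueDeg < 20 || hueDeg >= 330) return 0; // red wraps around the hue circle
        if (hueDeg < 45)  return 1;                 // orange
        if (hueDeg < 70)  return 2;                 // yellow
        if (hueDeg < 160) return 3;                 // green
        if (hueDeg < 200) return 4;                 // light blue / cyan
        if (hueDeg < 260) return 5;                 // blue
        return 6;                                   // violet
    }
}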
Since you already have the 7 colors you need, you don't need to use clustering. A sensible starting point would be: for each pixel in the image, find which of the 7 colors lies closest to it (using L2 distance on RGB) and assign that closest color to the pixel. You might be able to get better (more perceptually similar) results by first converting to some other color space, like CIE XYZ; however, this will require experimentation.
If the colors are predefined, then the solution is just to loop over every pixel and substitute it with the closest representative. As carlosdc said, some color space transformation may give a better result than just (r1-r2)**2 + (g1-g2)**2 + (b1-b2)**2.
To make things faster, a possible trick is to trade in some memory and cache the result for a given RGB triplet, i.e.
// Initialize the cache to 255 (255 means "not computed yet"; this works as long
// as the representative indices never reach 255, which is fine for 7 colors)
std::vector<unsigned char> cache(256*256*256, 255);

for (int y=0; y<h; y++)
{
    unsigned char *pixel = img + y*w*3;          // start of row y, 3 bytes per pixel
    for (int x=0; x<w; x++, pixel+=3)
    {
        int r = pixel[0], g = pixel[1], b = pixel[2];
        int key = r + (g<<8) + (b<<16);
        int converted = cache[key];
        if (converted == 255)
        {
            // ... find closest representative ...
            cache[key] = converted;
        }
        pixel[0] = red[converted];
        pixel[1] = green[converted];
        pixel[2] = blue[converted];
    }
}
If the number of colors is small you can use less memory. For example, limiting the number of representatives to 15, you need just 4 bits per cache entry (half the space), and something like the following would do it:
std::vector<unsigned char> cache(256*256*256/2, 255);   // two 4-bit entries per byte
...
// odd keys use the high nibble, even keys use the low nibble
int converted = (key&1) ? (cache[key>>1] >> 4) : (cache[key>>1] & 0x0F);
if (converted == 15) // Empty slot
{
    ...
    // the nibble currently holds 0xF, so XOR-ing it with (converted ^ 0xF) stores the new value
    cache[key>>1] ^= (key & 1) ? ((converted << 4)^0xF0) : (converted^0x0F);
}
...
...
If, on the contrary, you know that the number of possible input colors will be small and the number of representatives will be big, then a standard std::map can be a valid alternative.
Why don't you use one of the clustering methods, for example the k-means algorithm? Otherwise, google "image segmentation by color".
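As a rough illustration of the clustering idea (not a tuned implementation), plain k-means over RGB pixel values might look like this; the packed-int pixel format, the seeding and the fixed iteration count are assumptions:

import java.util.Random;

// Rough sketch of k-means over RGB pixel values; pixels holds packed 0xRRGGBB ints,
// and k would be 7 for the rainbow case.
static int[][] kmeansCenters(int[] pixels, int k, int iterations) {
    Random rnd = new Random(42);
    double[][] centers = new double[k][3];
    for (int c = 0; c < k; c++) {                     // seed centers with random pixels
        int p = pixels[rnd.nextInt(pixels.length)];
        centers[c][0] = (p >> 16) & 0xFF;
        centers[c][1] = (p >> 8) & 0xFF;
        centers[c][2] = p & 0xFF;
    }
    for (int it = 0; it < iterations; it++) {
        double[][] sums = new double[k][3];
        int[] counts = new int[k];
        for (int p : pixels) {                        // assignment step: nearest center
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                double dr = r - centers[c][0], dg = g - centers[c][1], db = b - centers[c][2];
                double d = dr*dr + dg*dg + db*db;
                if (d < bestDist) { bestDist = d; best = c; }
            }
            sums[best][0] += r; sums[best][1] += g; sums[best][2] += b;
            counts[best]++;
        }
        for (int c = 0; c < k; c++)                   // update step: move centers to cluster means
            if (counts[c] > 0)
                for (int d = 0; d < 3; d++) centers[c][d] = sums[c][d] / counts[c];
    }
    int[][] result = new int[k][3];                   // round the final centers to RGB values
    for (int c = 0; c < k; c++)
        for (int d = 0; d < 3; d++) result[c][d] = (int) Math.round(centers[c][d]);
    return result;
}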
If you want it to look good you'll want to use dithering, e.g. Floyd Steinberg dithering: http://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dithering
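A minimal sketch of Floyd-Steinberg error diffusion onto a small palette, assuming the image is held as a float array so the diffused error can temporarily leave the 0..255 range:

// img is a float[height][width][3] copy of the image; palette holds the target colors.
static void floydSteinberg(float[][][] img, int[][] palette) {
    int h = img.length, w = img[0].length;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int p = nearest(img[y][x], palette);                 // closest palette color
            float[] err = new float[3];
            for (int c = 0; c < 3; c++) {
                err[c] = img[y][x][c] - palette[p][c];           // quantization error
                img[y][x][c] = palette[p][c];
            }
            spread(img, x + 1, y,     err, 7f / 16);             // classic FS weights
            spread(img, x - 1, y + 1, err, 3f / 16);
            spread(img, x,     y + 1, err, 5f / 16);
            spread(img, x + 1, y + 1, err, 1f / 16);
        }
    }
}

static void spread(float[][][] img, int x, int y, float[] err, float f) {
    if (y < img.length && x >= 0 && x < img[0].length)
        for (int c = 0; c < 3; c++) img[y][x][c] += err[c] * f;
}

static int nearest(float[] px, int[][] palette) {
    int best = 0;
    double bestD = Double.MAX_VALUE;
    for (int i = 0; i < palette.length; i++) {
        double dr = px[0] - palette[i][0], dg = px[1] - palette[i][1], db = px[2] - palette[i][2];
        double d = dr*dr + dg*dg + db*db;
        if (d < bestD) { bestD = d; best = i; }
    }
    return best;
}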
I want a distance measure to find the similarity between images.
What I have tried so far:
1) Low-level distance metrics such as normalized cross-correlation. This retrieves similar images based on a threshold value, but it cannot retrieve images that are rotated or shifted, and if the brightness of an image is reduced the image is not retrieved even when it is of the same type.
2) Bhattacharyya coefficient: it retrieves rotated or shifted images but does not detect images whose intensity (brightness) is reduced.
3) Local features like SURF, which help with rotated (around 30 degrees) and transformed images, but not with images that differ in intensity.
What I need: a distance metric for image similarity that recognizes both images whose brightness is reduced and all images that are transformed (rotated and shifted).
Ideally I want a combination of these two metrics: (cross-correlation) + (Bhattacharyya coefficient).
Will mutual information help with this? Or can anyone suggest a new metric for similarity measurement for this problem? I tried googling, but the issue is broad and the answers were irrelevant. Can anyone guide me here? Thanks in advance.
I implemented mutual information and the Kullback-Leibler distance to find similarity in facades. It worked really well; how it works is explained here:
Image-based Procedural Modeling of Facades
All the steps are explained in the paper. There they are used not for the similarity of whole images but for the symmetry of image parts, yet it may work well for image comparison too. It is just an idea; you should try it. One thing where I really see a problem is rotation: I don't think this procedure is rotation invariant. Maybe you should also look into visual information retrieval techniques for your problem.
First you have to compute the mutual information. For that you create an accumulator array of size 257 x 257: the 256 x 256 part holds the joint distribution of the two images' gray values, and the extra row and column (index 256) accumulate the marginal distributions.
// image1 and image2 are gray-value arrays (0..255) of the same width x height
double[][] distributionTable = new double[257][257];  // 256x256 joint + one extra row/column for the marginals
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++)
    {
        int value1 = image1[y * width + x];
        int value2 = image2[y * width + x];
        // first the joint distribution
        distributionTable[value1][value2]++;
        // and now the marginal distributions
        distributionTable[value1][256]++;
        distributionTable[256][value2]++;
    }
Now that you have the distribution table, you can compute the Kullback-Leibler distance between the joint distribution and the product of the marginals, which is exactly the mutual information. Note that the sum runs over the distribution table, not over the pixels:
double size = width * height;   // number of pixels, used to turn counts into probabilities
double sum = 0;
for (int value1 = 0; value1 < 256; value1++)
    for (int value2 = 0; value2 < 256; value2++)
    {
        double ab = distributionTable[value1][value2] / size;
        if (ab == 0) continue;                      // 0 * log(0) is taken as 0
        double a = distributionTable[value1][256] / size;
        double b = distributionTable[256][value2] / size;
        // Kullback-Leibler distance
        sum += ab * Math.log(ab / (a * b));
    }
A larger sum means the similarity/symmetry between the two images/regions is higher. This should work well if the images just have a brightness difference. Maybe there are other distances which are invariant to rotation.
Maybe you should try SURF, SIFT or something like that. Then you can match the feature points: the more matches you get, the higher the similarity. I think this is a better approach, because you don't have to care about scale, brightness and rotation differences, and it is also quick to implement with OpenCV.
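As a sketch of that feature-matching approach with OpenCV's Java bindings, here is a version using ORB rather than SURF/SIFT (ORB ships with the default build); the class and method names are as in OpenCV 3.x/4.x, and the distance threshold is an arbitrary guess:

import org.opencv.core.*;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class FeatureSimilarity {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img1 = Imgcodecs.imread("a.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat img2 = Imgcodecs.imread("b.png", Imgcodecs.IMREAD_GRAYSCALE);

        // detect keypoints and compute binary descriptors for both images
        ORB orb = ORB.create();
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        orb.detectAndCompute(img1, new Mat(), kp1, desc1);
        orb.detectAndCompute(img2, new Mat(), kp2, desc2);

        // brute-force Hamming matching of the descriptors
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // crude similarity score: count matches with a small descriptor distance
        long good = matches.toList().stream().filter(m -> m.distance < 40).count();
        System.out.println("good matches: " + good + " / " + matches.toList().size());
    }
}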
Can anyone suggest any links, ideas or algorithms to generate flowers randomly, like the one in my profile pic? The profile-pic flower uses only a 10 x 10 grid and the algorithm is not truly random. I would also prefer that the new algorithm use a grid of about 500 x 500, or even better, allow the user to pick the size of the grid.
[plant is declared as int[][] plant = new int[10][10];]
public void generateSimpleSky(){
    for(int w2=0;w2<10;w2++)
        for(int w3=0;w3<10;w3++)
            plant[w2][w3]=5;
}

public void generateSimpleSoil(){
    for(int q=0;q<10;q++)
        plant[q][9]=1;
}

public void generateSimpleStem(){
    int ry=rand.nextInt(4);
    plant[3+ry][8]=4;
    xr=3+ry;
    for(int u=7;u>1;u--){
        int yu=rand.nextInt(3);
        // clamp so the random walk cannot leave the 10x10 grid
        xr=Math.max(1, Math.min(8, xr-1+yu));
        plant[xr][u]=4;
    }
}

public void generateSimpleFlower(){
    plant[xr][2]=3;
    for(int q2=1;q2<4;q2++)
        if((2-q2)!=0)
            plant[xr][q2]=2;
    for(int q3=xr-1;q3<=xr+1;q3++)
        if((xr-q3)!=0)
            plant[q3][2]=2;
}
It sounds like a reasonably simple problem where you just generate 1 parameter at a time, possibly based on the output of the previous variables.
My model of a flower will be: It has just a reasonably upright stem, a perfectly round center, some amount of leaves on the stem on alternating sides, petals perfectly distributed around the center.
random() is just a random number within some chosen bounds, the bounds may be unique for each variable. random(x1, x2, ..., xn) generates a random number within some bounds dependent on the variables x1, x2, ..., xn (as in stemWidth < stemHeight/2, a reasonable assumption).
The Stem
stemXPosition = width / 2
stemHeight = random()
stemWidth = random(stemHeight)
stemColour = randomColour()
stemWidthVariationMax = random(stemWidth, stemHeight)
stemWidthVariationPerPixel = random(stemWidth, stemHeight)
stemWidthVariationMax/-PerPixel are for generating a stem that isn't perfectly straight (if you want to do something that complicated, a low PerPixel is for smoothness). Generate the stem using these as follows:
pixelRelative[y][0] := left x-position at that y-position, relative to the stem
pixelRelative[y][1] := right x-position at that y-position, relative to the stem

pixelRelative[0][0] = randomInRange(-stemWidthVariationMax, stemWidthVariationMax)
for each y > 0:
    pixelRelative[y][0] = min(max(randomInRange(pixelRelative[y-1][0] - stemWidthVariationPerPixel,
                                                pixelRelative[y-1][0] + stemWidthVariationPerPixel),
                                  -stemWidthVariationMax),
                              stemWidthVariationMax)
// pixelRelative[0][1] and pixelRelative[y][1] are generated the same way as pixelRelative[y][0]
for each y:
    pixelAbsolute[y][0] = width / 2 - stemWidth / 2 + pixelRelative[y][0]
    pixelAbsolute[y][1] = width / 2 + stemWidth / 2 + pixelRelative[y][1]
You can also use arcs to simplify things and go more than 1 pixel at a time.
The Top
centerRadius = random(stemHeight)
petalCount = random() // probably >= 3
petalSize = random(centerRadius, petalCount)
It's not too easy to generate the petals: you need to step from 0 to 2*PI with a step size of 2*PI/petalCount and generate arcs around the circle. It requires either a good graphics API or some decent maths (see the sketch below).
Here are some nicely generated flower tops, though seemingly not open-source. Note that they don't have a center at all (i.e. centerRadius = 0).
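A possible sketch of the petal placement with java.awt.Graphics2D, drawing each petal as an ellipse rotated to point away from the center; the petal shape itself is an assumption and can be swapped for arcs or curves:

// needs java.awt.Color, java.awt.Graphics2D, java.awt.geom.AffineTransform, java.awt.geom.Ellipse2D
void drawTop(Graphics2D g, double cx, double cy, double centerRadius,
             int petalCount, double petalSize, Color petalColor, Color centerColor) {
    for (int i = 0; i < petalCount; i++) {
        double angle = 2 * Math.PI * i / petalCount;                  // evenly spaced around the circle
        double px = cx + Math.cos(angle) * (centerRadius + petalSize / 2);
        double py = cy + Math.sin(angle) * (centerRadius + petalSize / 2);
        AffineTransform old = g.getTransform();
        g.rotate(angle, px, py);                                      // point the petal away from the center
        g.setColor(petalColor);
        g.fill(new Ellipse2D.Double(px - petalSize / 2, py - petalSize / 4,
                                    petalSize, petalSize / 2));
        g.setTransform(old);
    }
    g.setColor(centerColor);
    g.fill(new Ellipse2D.Double(cx - centerRadius, cy - centerRadius,
                                2 * centerRadius, 2 * centerRadius));
}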
The Leaves
You could probably write an entire paper on this (like this one), but a simple idea would be to generate a half circle, extend lines outward from there to meet at twice the radius of the circle, and draw parallel veins on the leaf.
Once you have a leaf generation algorithm:
leafSize = random(stemHeight) // either all leaves are the same size or generate the size for each randomly
leafStemLength = random(leafSize) // either all leaves have the same stem length or generate for each randomly
leafStemWidth = random(leafStemLength)
leaf[0].YPosition = random(stemHeight)
leaf[0].XSide = randomly either left or right
leaf[0].rotation = random between say 0 and 80 degrees
for each leaf i:
leaf[i].YPosition = random(stemHeight, leaf[i-1]) // only generate new leaves above previous leaves
leaf[i].XSide = opposite of leaf[i-1].XSide
Last words
The way to determine the bounds of each random() would be either to argue it out, or to give it some fixed value, generate everything else randomly a few times, and keep increasing/decreasing it until it starts to look weird.
10 x 10 versus 500 x 500 would probably require greatly different algorithms; I wouldn't recommend the above for anything below 100 x 100. Maybe generate a bigger image and simply shrink it using averaging or something.
Code
I started writing some Java code, then realised it might take a bit longer than I'd like to spend on this, so I'll show you what I have so far.
// some other code, including these functions to generate random numbers:
float nextFloat(float rangeStart, float rangeEnd);
int nextInt(int rangeStart, int rangeEnd);
...
// generates a color somewhere between green and brown
Color stemColor = Color.getHSBColor(nextFloat(0.1f, 0.2f), nextFloat(0.5f, 1f), nextFloat(0.2f, 0.8f));
int stemHeight = nextInt(height/2, 3*height/4);
int stemWidth = nextInt(height/20, height/20 + height/5);
Color flowerColor = ??? // I just couldn't use the same method as above to generate bright colors, but I'm sure it's not too difficult
int flowerRadius = nextInt(Math.min(stemHeight, height - stemHeight)/4, 3*Math.min(stemHeight, height - stemHeight)/4);
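One possible (untested) way to fill in the flowerColor placeholder above, assuming the same nextFloat helper: keep saturation and brightness high so the colour reads as bright, and let the hue be anything:

Color flowerColor = Color.getHSBColor(nextFloat(0f, 1f), nextFloat(0.8f, 1f), nextFloat(0.8f, 1f));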
I know this is possibly a duplicate question:
Ruby, Generate a random hex color
My question is slightly different: I need to know how to generate random light hex colors only, not dark ones.
In this thread, colour luminance is described with the formula
(0.2126*r) + (0.7152*g) + (0.0722*b)
The same formula for luminance is given in Wikipedia (and it is taken from this publication). It reflects human perception, with green being the most "intense" and blue the least.
Therefore, you can keep picking r, g, b until the luminance value goes above your chosen threshold between dark and light (on the 0 to 255 scale). For example:
lum, ary = 0, []
while lum < 128
  ary = (1..3).collect { rand(256) }
  lum = ary[0]*0.2126 + ary[1]*0.7152 + ary[2]*0.0722
end
Another article refers to brightness, which is the arithmetic mean of r, g and b. Note that brightness is even more subjective, as a given target luminance can elicit different perceptions of brightness in different contexts (in particular, the surrounding colours can affect your perception).
All in all, it depends on which colours you consider "light".
Just some pointers:
Use HSL and generate the individual values randomly, but keeping L in the interval of your choosing. Then convert to RGB, if needed.
It's a bit harder than generating RGB with all components over a certain value (say 0x7f), but this is the way to go if you want the colors distributed evenly.
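As a sketch of that idea (in Java for illustration, which exposes HSB rather than HSL; the exact saturation and brightness ranges are guesses):

// needs java.awt.Color and java.util.Random
Random rand = new Random();
Color light = Color.getHSBColor(rand.nextFloat(),                 // any hue
                                rand.nextFloat() * 0.5f,          // low-to-moderate saturation
                                0.8f + rand.nextFloat() * 0.2f);  // high brightness
String hex = String.format("%02x%02x%02x", light.getRed(), light.getGreen(), light.getBlue());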
-- I found that 128 to 256 gives the lighter colors
Dim rand As New Random
Dim col As Color
col = Color.FromArgb(rand.Next(128, 256), rand.Next(128, 256), rand.Next(128, 256))
All colors where each of r, g, b is greater than 0x7f:
color = (0..2).map{"%0x" % (rand * 0x80 + 0x80)}.join
I modified one of the answers from the linked question (Daniel Spiewak's answer) to come up with something that is pretty flexible in terms of excluding darker colors:
floor = 0x22 # meaning darkest possible color is #222222
r = (rand(256-floor) + floor).to_s 16
g = (rand(256-floor) + floor).to_s 16
b = (rand(256-floor) + floor).to_s 16
[r,g,b].map {|h| h.rjust 2, '0'}.join
You can change the floor value to suit your needs. A higher value will limit the output to lighter colors, and a lower value will allow darker colors.
A really nice solution is provided by the color-generator gem, where you can call:
ColorGenerator.new(saturation: 0.75, lightness: 0.5).create_hex
I want to draw some items on screen; each item is in one of N sets. The number of sets changes all the time, so I need to calculate N different colours which are as different as possible (to make it easy to identify what is in which set).
So, for example, with N = 2 my results would be black and white. With three I guess I would get all red, all green, all blue. With four, it's less obvious what the correct answer is, and this is where I'm having trouble.
EDIT: The obvious approach is to map 0 to red, 1 to green, and all the colours in between to the appropriate rainbow colours; then you can get a colour for set N by doing GetRainbowColour(N / TotalSets), so a GetRainbowColour method is all that's needed to solve this problem.
You can read up on the HSL and HSV color models in this Wikipedia article. The "H" in the acronyms stands for Hue, and that is the rainbow you want. It sounds like you want saturation to max out. The article also explains how to convert to RGB color.
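A possible GetRainbowColour along those lines, as a sketch using Java's built-in HSB conversion with saturation and brightness maxed out:

import java.awt.Color;

static Color getRainbowColour(int setIndex, int totalSets) {
    float hue = (float) setIndex / totalSets;   // spread the sets evenly over the hue circle
    return Color.getHSBColor(hue, 1f, 1f);      // full saturation and brightness
}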
Looks like a similar question has been asked before here.
The answer to this question is subjective - what is best contrast to someone with full vision is not necessarily best contrast to someone who is colour blind or has limited vision or someone with normal eyesight who is operating the equipment in a dark environment.
Physiologically, humans have much better resolution for intensity than for hue or saturation. That is why analogue TV, digital video and photo compression throw away colour information to reduce bandwidth (4:2:2): if you put two pixels of different intensities together, it doesn't matter what colour they are; we can only sense colour differences over large areas of similar intensity.
So if the thing you are trying to display has lots of small areas of only a few pixels, or you want it to be used in the dark (in the dark the brain boosts the blue cells and we don't see colour as well) or by the 10% of the male population who are colour blind, consider using intensity as the main differentiating factor rather than hue. GetRainbowColour would ignore the most important dimension of the human visual sense, but can work quite well for continuous data.
Thanks, brainjam, for the suggestion to use HSL. I whipped up this little function that seems to work nicely for stacked graphs:
var contrastingColors = function(len) {
    var result = [];
    if (len > 0) {
        var h = [0, 180]; // red, cyan
        var shft = 360 / len;
        var flip = 0;
        var l = 50;
        for (var ix = 0; ix < len; ix++) {
            result.push("hsl(" + h[flip] + ",100%," + l + "%)");
            h[flip] = (h[flip] + shft) % 360;
            flip = flip ? 0 : 1;
            if (flip == 0) {
                l = (l == 50) ? 30 : (l == 30 ? 70 : 50);
            }
        }
    }
    return result;
};
Original Question
If you are given N maximally distant colors (and some associated distance metric), can you come up with a way to sort those colors into some order such that the first M are also reasonably close to being a maximally distinct set?
In other words, given a bunch of distinct colors, come up with an ordering so I can use as many colors as I need starting at the beginning and be reasonably assured that they are all distinct and that nearby colors are also very distinct (e.g., bluish red isn't next to reddish blue).
Randomizing is OK but certainly not optimal.
Clarification: Given some large and visually distinct set of colors (say 256, or 1024), I want to sort them such that when I use the first, say, 16 of them that I get a relatively visually distinct subset of colors. This is equivalent, roughly, to saying I want to sort this list of 1024 so that the closer individual colors are visually, the farther apart they are on the list.
This also sounds to me like some kind of resistance graph where you try to map out the path of least resistance. If you invert the requirement into the path of maximum resistance, it could perhaps be used to produce a set that yields maximum difference from the start and, towards the end, returns values closer to the others.
For instance, here's one way to perhaps do what you want.
Calculate the distance (ref your other post) from each color to all other colors
Sum the distances for each color; this gives you an indication of how far away this color is from all other colors in total
Order the list by distance, going down
This would, it seems, produce a list that starts with the color that is farthest away from all other colors and then goes down; colors towards the end of the list would be closer to other colors in general.
Edit: Reading your reply to my first post, about the spatial subdivision, this would not exactly fit the above description, since colors close to other colors would fall to the bottom of the list. But say you have a cluster of colors somewhere: at least one of the colors from that cluster would be located near the start of the list, and it would be the one that was, in total, farthest away from all the other colors. If that makes sense.
This problem is called color quantization, and has many well known algorithms: http://en.wikipedia.org/wiki/Color_quantization I know people who implemented the octree approach to good effect.
It seems perception is important to you; in that case you might want to consider working with a perceptual color space such as YUV, YCbCr or Lab. Every time I've used those, they have given me much better results than sRGB alone.
Converting to and from sRGB can be a pain, but in your case it could actually make the algorithm simpler, and as a bonus it will mostly work for color-blind users too!
N maximally distant colors can be considered a set of well-distributed points in a 3-dimensional (color) space. If you can generate them from a Halton sequence, then any prefix (the first M colors) also consists of well-distributed points.
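A minimal sketch of that idea: the i-th colour is the i-th point of a 3-D Halton sequence (bases 2, 3, 5), interpreted as coordinates in the unit RGB cube, so any prefix of the list stays well spread out:

// radical-inverse function: the standard Halton construction for one dimension
static double halton(int index, int base) {
    double f = 1.0, result = 0.0;
    while (index > 0) {
        f /= base;
        result += f * (index % base);
        index /= base;
    }
    return result;
}

// first m Halton points used as RGB colours in [0,1]^3
static float[][] haltonColors(int m) {
    float[][] colors = new float[m][3];
    for (int i = 0; i < m; i++) {
        colors[i][0] = (float) halton(i + 1, 2);   // red
        colors[i][1] = (float) halton(i + 1, 3);   // green
        colors[i][2] = (float) halton(i + 1, 5);   // blue
    }
    return colors;
}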
If I'm understanding the question correctly, you wish to obtain the subset of M colours with the highest mean distance between colours, given some distance function d.
Put another way, considering the initial set of N colours as a large, undirected graph in which all colours are connected, you want to find the longest path that visits any M nodes.
Solving NP-complete graph problems is way beyond me I'm afraid, but you could try running a simple physical simulation:
Generate M random points in colour space
Calculate the distance between each point
Calculate repulsion vectors for each point that will move it away from all other points (using 1 / (distance ^ 2) as the magnitude of the vector)
Sum the repulsion vectors for each point
Update the position of each point according to the summed repulsion vectors
Constrain any out of bound coordinates (such as luminosity going negative or above one)
Repeat from step 2 until the points stabilise
For each point, select the nearest colour from the original set of N
It's far from efficient, but for small M it may be efficient enough, and it will give near optimal results.
If your colour distance function is simple, there may be a more deterministic way of generating the optimal subset.
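A rough sketch of the simulation described above, treating colours as points in the unit RGB cube; the step size and iteration count are arbitrary guesses, and the final snap-to-nearest-original-colour step is left out:

import java.util.Random;

static double[][] spreadColors(int m, int iterations) {
    Random rnd = new Random();
    double[][] p = new double[m][3];
    for (double[] c : p)                               // random starting points
        for (int d = 0; d < 3; d++) c[d] = rnd.nextDouble();

    double step = 0.001;
    for (int it = 0; it < iterations; it++) {
        double[][] force = new double[m][3];
        for (int i = 0; i < m; i++)                    // sum the 1/d^2 repulsions from every other point
            for (int j = 0; j < m; j++) {
                if (i == j) continue;
                double dx = p[i][0]-p[j][0], dy = p[i][1]-p[j][1], dz = p[i][2]-p[j][2];
                double d2 = dx*dx + dy*dy + dz*dz + 1e-9;
                double f = 1.0 / d2;
                double len = Math.sqrt(d2);
                force[i][0] += f * dx / len;           // unit direction away from j, scaled by 1/d^2
                force[i][1] += f * dy / len;
                force[i][2] += f * dz / len;
            }
        for (int i = 0; i < m; i++)                    // move each point and clamp it back into the cube
            for (int d = 0; d < 3; d++)
                p[i][d] = Math.min(1.0, Math.max(0.0, p[i][d] + step * force[i][d]));
    }
    return p;
}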
Start with two lists. CandidateColors, which initially contains your distinct colors and SortedColors, which is initially empty.
Pick any color and remove it from CandidateColors and put it into SortedColors. This is the first color and will be the most common, so it's a good place to pick a color that jives well with your application.
For each color in CandidateColors calculate its total distance. The total distance is the sum of the distance from the CandidateColor to each of the colors in SortedColors.
Remove the color with the largest total distance from CandidateColors and add it to the end of SortedColors.
If CandidateColors is not empty, go back to step 3.
This greedy algorithm should give you good results.
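A sketch of that greedy loop, assuming a colordistance(Color, Color) helper like the one in the answer below:

import java.awt.Color;
import java.util.ArrayList;
import java.util.List;

static List<Color> greedyOrder(List<Color> candidateColors) {
    List<Color> sorted = new ArrayList<>();
    List<Color> remaining = new ArrayList<>(candidateColors);
    if (remaining.isEmpty()) return sorted;
    sorted.add(remaining.remove(0));                   // pick a starting color
    while (!remaining.isEmpty()) {
        Color best = null;
        double bestTotal = -1;
        for (Color c : remaining) {
            double total = 0;                          // total distance to the colors already chosen
            for (Color s : sorted) total += colordistance(c, s);
            if (total > bestTotal) { bestTotal = total; best = c; }
        }
        remaining.remove(best);                        // move the farthest candidate to the sorted list
        sorted.add(best);
    }
    return sorted;
}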
You could just sort the candidate colors by their minimum distance to any of the index colors, largest distance first.
Using Euclidean color distance:
public double colordistance(Color color0, Color color1) {
    int c0 = color0.getRGB();
    int c1 = color1.getRGB();
    return distance(((c0>>16)&0xFF), ((c0>>8)&0xFF), (c0&0xFF), ((c1>>16)&0xFF), ((c1>>8)&0xFF), (c1&0xFF));
}

public double distance(int r1, int g1, int b1, int r2, int g2, int b2) {
    int dr = (r1 - r2);
    int dg = (g1 - g2);
    int db = (b1 - b2);
    return Math.sqrt(dr * dr + dg * dg + db * db);
}
Though you can replace it with anything you want. It just needs a color distance routine.
public void colordistancesort(Color[] candidateColors, Color[] indexColors) {
    double current;
    double distance[] = new double[candidateColors.length];
    // for each candidate, find its distance to the nearest index color
    for (int j = 0; j < candidateColors.length; j++) {
        distance[j] = -1;
        for (int k = 0; k < indexColors.length; k++) {
            current = colordistance(indexColors[k], candidateColors[j]);
            if ((distance[j] == -1) || (current < distance[j])) {
                distance[j] = current;
            }
        }
    }
    // simple in-place sort, largest minimum-distance first
    for (int j = 0; j < candidateColors.length; j++) {
        for (int k = j + 1; k < candidateColors.length; k++) {
            if (distance[j] < distance[k]) {
                double d = distance[k];
                distance[k] = distance[j];
                distance[j] = d;
                Color m = candidateColors[k];
                candidateColors[k] = candidateColors[j];
                candidateColors[j] = m;
            }
        }
    }
}
Do you mean that from a set of N colors, you need to pick M colors, where M < N, such that M is the best representation of the N colors in the M space?
As a better example, reduce a true-color image (24-bit color space) to an 8-bit mapped color space (GIF?).
There are quantization algorithms for this, like the Adaptive Spatial Subdivision algorithm used by ImageMagick.
These algorithms usually don't just pick existing colors from the source space; they create new colors in the target space that most closely resemble the source colors. As a simplified example: if the original image has 3 colors, two of them red (with different intensities or bluish tints etc.) and the third blue, and you need to reduce to two colors, the target image could have one red color that is some kind of average of the original two reds, plus the blue color from the original image.
If you need something else then I didn't understand your question :)
You can split them into RGB hex format so that you can compare the R with the R of a different color, and the same with the G and B.
Same format as HTML
XX XX XX
RR GG BB
00 00 00 = black
ff ff ff = white
ff 00 00 = red
00 ff 00 = green
00 00 ff = blue
So the only thing you would need to decide is how close you want the colors and what is an acceptable difference for the segments to be considered different.
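For illustration, a small (hypothetical) helper that compares two such hex strings channel by channel against a tolerance:

static boolean sameSegment(String hex1, String hex2, int tolerance) {
    int c1 = Integer.parseInt(hex1.replace("#", ""), 16);
    int c2 = Integer.parseInt(hex2.replace("#", ""), 16);
    int dr = Math.abs(((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF));  // RR difference
    int dg = Math.abs(((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF));    // GG difference
    int db = Math.abs((c1 & 0xFF) - (c2 & 0xFF));                  // BB difference
    return dr <= tolerance && dg <= tolerance && db <= tolerance;
}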