Generate RGB color set with highest diversity - algorithm

I'm trying to create an algorithm that outputs a set of RGB color values that are as distinct from each other as possible.
For example, given a set of 3 colors:
(255, 0, 0) [Red]
(0, 255, 0) [Green]
(0, 0, 255) [Blue]
the next 3 colors would be:
(255, 255, 0) [Yellow]
(0, 255, 255) [Cyan]
(255, 0, 255) [Magenta]
The next colors should lie in between, at the new intervals. Basically, my idea is to traverse the whole color spectrum in systematic intervals, similar to this:
A set of 13 colors should include the color halfway between 1 and 7, continuing that pattern indefinitely.
I'm currently struggling to turn this pattern into an algorithm over RGB values, as it does not seem trivial to me. I'm thankful for any hints that can point me to a solution.
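To make the pattern concrete, here is a rough sketch in Python of the kind of subdivision I mean (not a working solution, just the level-refinement idea):

from itertools import product

def channel_levels(depth):
    # depth 0 -> [0, 255]; each round inserts the midpoint of every interval
    levels = [0, 255]
    for _ in range(depth):
        mids = [(a + b) // 2 for a, b in zip(levels, levels[1:])]
        levels = sorted(set(levels + mids))
    return levels

print(list(product(channel_levels(0), repeat=3)))       # the 8 corner colors
print(len(list(product(channel_levels(1), repeat=3))))  # 27 colors at depth 1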

The Wikipedia article on color difference is worth reading, and so is the article on a “low-cost approximation” by CompuPhase linked therein. I will base my attempt on the latter.
You didn't specify a language, so I'll write it in unoptimized Python (except for the integer optimizations already present in the reference article), so that it can readily be translated into other languages.
n_colors = 25
n_global_moves = 32

class Color:

    max_weighted_square_distance = (((512 + 127) * 65025) >> 8) + 4 * 65025 + (((767 - 127) * 65025) >> 8)

    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

    def weighted_square_distance(self, other):
        rm = (self.r + other.r) // 2  # integer division
        dr = self.r - other.r
        dg = self.g - other.g
        db = self.b - other.b
        return (((512 + rm) * dr*dr) >> 8) + 4 * dg*dg + (((767 - rm) * db*db) >> 8)

    def min_weighted_square_distance(self, index, others):
        min_wsd = self.max_weighted_square_distance
        for i in range(0, len(others)):
            if i != index:
                wsd = self.weighted_square_distance(others[i])
                if min_wsd > wsd:
                    min_wsd = wsd
        return min_wsd

    def is_valid(self):
        return 0 <= self.r <= 255 and 0 <= self.g <= 255 and 0 <= self.b <= 255

    def add(self, other):
        return Color(self.r + other.r, self.g + other.g, self.b + other.b)

    def __repr__(self):
        return f"({self.r}, {self.g}, {self.b})"


colors = [Color(127, 127, 127) for i in range(0, n_colors)]

steps = [Color(dr, dg, db) for dr in [-1, 0, 1]
                           for dg in [-1, 0, 1]
                           for db in [-1, 0, 1] if dr or dg or db]  # i.e., except 0,0,0

moved = True
global_move_phase = False
global_move_count = 0

while moved or global_move_phase:
    moved = False
    for index in range(0, len(colors)):
        color = colors[index]
        if global_move_phase:
            best_min_wsd = -1
        else:
            best_min_wsd = color.min_weighted_square_distance(index, colors)
        for step in steps:
            new_color = color.add(step)
            if new_color.is_valid():
                new_min_wsd = new_color.min_weighted_square_distance(index, colors)
                if best_min_wsd < new_min_wsd:
                    best_min_wsd = new_min_wsd
                    colors[index] = new_color
                    moved = True
    if not moved:
        if global_move_count < n_global_moves:
            global_move_count += 1
            global_move_phase = True
    else:
        global_move_phase = False

print(f"n_colors: {n_colors}")
print(f"n_global_moves: {n_global_moves}")
print(colors)
The colors are first all set to grey, i.e., put at the center of the RGB color cube, and then moved within the cube in such a way as to hopefully maximize the minimum distance between colors.
To save CPU time, the square of the distance is used instead of the distance itself, which would require the calculation of a square root.
Colors are moved one at a time, by a maximum of 1 in each of the 3 directions, to one of the adjacent positions that maximizes the minimum distance from the other colors. By doing so, the global minimum distance is (approximately) maximized.
The “global move” phases are needed to overcome situations where no color would move on its own, but where forcing all colors to move to a position not much worse than their current one lets the whole set settle into a better configuration through the subsequent regular moves. This is best seen with 3 colors and no global moves, with the weighted square distance simplified to dr*dr + dg*dg + db*db: the configuration becomes
[(2, 0, 0), (0, 253, 255), (255, 255, 2)]
while, by adding 2 global moves, the configuration becomes the expected one
[(0, 0, 0), (0, 255, 255), (255, 255, 0)]
The algorithm produces just one of several possible solutions. Unfortunately, since the metric used is not Euclidean, it's not possible to simply flip the 3 dimensions in the 8 possible combinations (i.e., replace r→255-r and/or do the same for g and/or b) to get equivalent solutions. It's probably best to introduce randomness into the order in which the color-moving steps are tried out, and to vary the random seed.
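A minimal sketch of that suggestion (my own two-line tweak, reusing the steps list from the code above):

import random

random.seed(12345)     # vary this seed to explore other solutions
random.shuffle(steps)  # randomize the order in which candidate moves are tried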
I have not corrected for the monitor's gamma, because its purpose is precisely that of altering the spacing of the brightness in order to compensate for the eyes' different sensitivity at high and low brightness. Of course, the screen gamma curve deviates from the ideal, and a (system dependent!) modification of the gamma would yield better results, but the standard gamma is a good starting point.
This is the output of the algorithm for 25 colors:
Note that the first 8 colors (the bottom row and the first 3 colors of the row above) are close to the corners of the RGB cube (they are not at the corners because of the non-Euclidean metric).

First let me ask: do you want to remain in sRGB and go through each RGB combination?
OR (and this is my assumption) do you actually want colors that are "farthest" from each other? Since you used the term "distinct", I'm going to cover finding color differences.
Model Your Perceptions
sRGB is a colorspace that refers to your display/output. And while the gamma curve is "sorta" perceptually uniform, the overall sRGB colorspace is not; it is intended to model the display more than human perception.
To determine "maximum distance" between colors in terms of perception, you want a model of perception, either using a colorspace that is perceptually uniform or using a color appearance model (CAM).
As you just want sRGB values as a result, using a uniform colorspace is probably sufficient, such as CIELAB or CIELUV. As these use Cartesian coordinates, the difference between two colors in (L*a*b*) is simply the Euclidean distance.
If you want to work with polar coordinates (i.e. hue angle) then you can go one step past CIELAB, into CIELCh.
How To Do It
I suggest Bruce Lindbloom's site for the relevant math.
The simplified steps (a rough code sketch follows the list):
1. Linearize the sRGB by removing the gamma curve from each of the three color channels.
2. Convert the linearized values into CIE XYZ (use D65, no adaptation).
3. Convert XYZ into L*a*b*.
4. Find the opposite:
   a. Find the "opposite" color by plotting a line through 0, making the line equal in distance on both sides of zero. OR
   b. ALT: Do one more transform, from LAB into CIELCh, then find the opposite by rotating hue 180 degrees. Then convert back to LAB.
5. Convert LAB to XYZ.
6. Convert XYZ to sRGB.
7. Add the sRGB gamma curve back to each channel.
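A hedged Python sketch of the pipeline above (all function names are my own; the matrices and constants are the standard published sRGB/D65 values, per Lindbloom):

def srgb_to_linear(c):                      # step 1: remove the gamma curve
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):                      # step 7: re-apply the gamma curve
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def rgb_to_lab(r, g, b):                    # steps 2-3: linear RGB -> XYZ -> L*a*b*
    rl, gl, bl = (srgb_to_linear(c / 255) for c in (r, g, b))
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_rgb(L, a, b):                    # steps 5-7: L*a*b* -> XYZ -> sRGB
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t ** 3 > 216 / 24389 else (116 * t - 16) * 27 / 24389
    x, y, z = finv(fx) * 0.95047, finv(fy) * 1.0, finv(fz) * 1.08883
    rl = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    gl = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    bl = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    return tuple(round(linear_to_srgb(c) * 255) for c in (rl, gl, bl))

# step 4a: the "opposite" through the origin of the a*/b* plane
L, a, b = rgb_to_lab(0x0C, 0x54, 0x90)
print(lab_to_rgb(L, -a, -b))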
Staying in sRGB?
If you are less concerned about perceptual uniformity, then you could just stay in sRGB, though the results will be less accurate. In this case, all you need to do is take the difference of each channel relative to 255 (i.e., invert each channel):
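For example (a two-line sketch; #0C5490 is the starting color used in the comparisons below):

def invert_srgb(r, g, b):
    return 255 - r, 255 - g, 255 - b

print("#%02X%02X%02X" % invert_srgb(0x0C, 0x54, 0x90))   # prints #F3AB6F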
What's The Difference of the Differences?
Here are some comparisons of the two methods discussed above:
For starting color #0C5490, sRGB difference method:
Same starting color, but using CIELAB L* C* h* (and just rotating hue 180 degrees, no adjustment to L*).
Starting color #DFE217, sRGB difference method:
Here in CIELAB LCh, just rotating hue 180:
And again in LCh, but this time also adjusting L* as (100 - L*firstcolor)
Now you'll notice a lot of change in hue angle on these. The truth is that while LAB is "somewhat uniform," it's pretty squirrelly in the area of blue.
Take a look at the numbers:
They seem substantially different for hue, chroma, a, b... yet they create the same HEX color value! So yes, even CIELAB has inaccuracies (especially blue).
If you want more accuracy, try CIECAM02

Related

Determine Dominant Color of Picture

I am trying to come up with an algorithm to determine the dominant color in an image (either taken from the device's camera or by selecting an existing photo in the photo library). I have written an iOS 8 application in Swift that can grab the RGB value of each pixel in the image, but I don't really know what to do from there.
For pixels that have a distinct dominant color, say RGB(230, 15, 30), it's pretty easy to determine the dominant color. However, I don't really know what to do for pixels that have RGB values where 2 of the 3 values are similar, say RGB(200, 215, 30).
My original thought was to keep 3 counters (one for each color) and add each pixel's corresponding RGB values to those counters. At the end, I would divide each counter by the total number of pixels and take the max of the 3 values as the dominant color. However, like I mentioned before, when the results are close to each other, I can't say that one color necessarily dominates the others.
Just looking for some thoughts and suggestions
I ran into this problem a few weeks ago, and having read many posts about it, I found the best method to be hierarchical quantization, presented in this post: http://aishack.in/tutorials/dominant-color/. I have also implemented it in Python: https://github.com/wenmin-wu/dominant-colors-py . You can install it with pip (pip install dominantcolors) and use it as follows:
from dominantcolors import get_image_dominant_colors
dominant_colors = get_image_dominant_colors(image_path='/path/to/image_path', num_colors=3)
An idea:
The first step is to reduce the number of colors, for example via "Color Quantization using K-Means". In the example from the link, the number of colors was reduced from 96K to 64.
The second step is to calculate the ratio for every color and pick the one with the biggest value; a rough sketch follows below.
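A rough sketch of both steps (assuming scikit-learn and Pillow are available; the file name and cluster count are placeholders):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

pixels = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float64).reshape(-1, 3)
kmeans = KMeans(n_clusters=64, n_init=4).fit(pixels)   # step 1: quantize to 64 colors
counts = np.bincount(kmeans.labels_)                   # step 2: how often each color occurs
dominant = kmeans.cluster_centers_[counts.argmax()].round().astype(int)
print(dominant)   # [r g b] of the most frequent cluster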
You can check my hobby project to find the dominant color in a UIImage: https://github.com/ruuki/ColorFinder
What it basically does is create clusters of the colors of the image and return the most dominant one in a completion block. You can tweak the threshold parameters within the source code. Hope it helps.
I had a similar task to do; here is my Python code:

import picamera
import picamera.array
import numpy as np
from math import sqrt, atan2, degrees

DEBUG = False

def get_colour_name(rgb):
    rgb = rgb / 255
    alpha = (2 * rgb[0] - rgb[1] - rgb[2]) / 2
    beta = sqrt(3) / 2 * (rgb[1] - rgb[2])
    hue = int(degrees(atan2(beta, alpha)))
    std = np.std(rgb)
    mean = np.mean(rgb)
    if hue < 0:
        hue = hue + 360
    if std < 0.055:
        if mean > 0.85:
            colour = "white"
        elif mean < 0.15:
            colour = "black"
        else:
            colour = "grey"
    elif (hue > 50) and (hue <= 160):
        colour = "green"
    elif (hue > 160) and (hue <= 250):
        colour = "blue"
    else:
        colour = "red"
    if DEBUG:
        print(rgb, hue, std, mean, colour)
    return str(int(hue)) + ": " + colour

def scan_colour():
    with picamera.PiCamera() as camera:
        with picamera.array.PiRGBArray(camera) as stream:
            camera.start_preview()
            camera.resolution = (100, 100)
            for foo in camera.capture_continuous(stream, 'rgb', use_video_port=False, resize=None, splitter_port=0, burst=True):
                stream.truncate()
                stream.seek(0)
                RGBavg = stream.array.mean(axis=0).mean(axis=0)
                colour = get_colour_name(RGBavg)
                print(colour)

scan_colour()
My idea was to compute the mean colour of all pixels and determine the colour name from the hue angle. To get grayscale answers, I check whether the colour is near the grey axis of the colour cube (low standard deviation across the channels).

converting rgb image to double and keeping the values in matlab

I have an image and I want to import it into MATLAB. I am using the following code. The problem is that when I convert the image to grayscale, everything changes and the converted image is not similar to the original one. In other words, I want to keep the values (or, let's say, the image) as they are in the original image. Is there any way to do this?
I = imread('myimage.png');
figure, imagesc(I), axis equal tight xy
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
Your original image is already using a jet colormap. The problem is that when you convert it to grayscale, you lose some crucial information. See the image below.
In the original image you have a heatmap. Blue areas generally indicate "low values", whereas red areas indicate "high values". But when converted to grayscale, both areas indicate low values, as they approach dark pixels (see the arrows).
A possible solution is this:
You take every pixel of your image, find the nearest (closest)
color value in the jet colormap and use its index as a gray value.
I will show you first the final code and the results. The explanation goes below:
I = im2double(imread('myimage.png'));
map = jet(256);
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
Igray = zeros(size(I, 1), size(I, 2), 'uint8');
for ii = 1:size(Irgb, 1)
    [~, idx] = min(sum((bsxfun(@minus, Irgb(ii, :), map)) .^ 2, 2));
    Igray(ii) = idx - 1;
end
clear Irgb;
subplot(2,1,1), imagesc(I), axis equal tight xy
subplot(2,1,2), imagesc(Igray), axis equal tight xy
Result:
>> whos I Igray
  Name        Size          Bytes  Class     Attributes

  I           110x339x3    894960  double
  Igray       110x339       37290  uint8
Explanation:
First, you get the jet colormap, like this:
map = jet(256);
It will return a 256x3 colormap with the possible colors in the jet palette, where each row is an RGB pixel. map(1,:) would be kind of a dark blue, and map(256,:) would be kind of a dark red, as expected.
Then, you do this:
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
... to turn your 110x339x3 image into a 37290x3 matrix, where each row is an RGB pixel.
Now, for each pixel, you take the Euclidean distance of that pixel to the map pixels. You take the index of the nearest one and use it as a gray value. The minus one (-1) is because the index is in the range 1..256, but a gray value is in the range 0..255.
Note: the Euclidean distance takes a square root at the end, but since we are just trying to find the closest value, there is no need to do so.
EDIT:
Here is a 10x faster version of the code:
I = im2double(imread('myimage.png'));
map = jet(256);
[C, ~, IC] = unique(reshape(I, size(I, 1) * size(I, 2), 3), 'rows');
equiv = zeros(size(C, 1), 1, 'uint8');
for ii = 1:numel(equiv)
    [~, idx] = min(sum((bsxfun(@minus, C(ii, :), map)) .^ 2, 2));
    equiv(ii) = idx - 1;
end
Irgb = reshape(equiv(IC), size(I, 1), size(I, 2));
Irgb = Irgb(end:-1:1,:);
clear equiv C IC;
It runs faster because it exploits the fact that the colors in your image are restricted to those in the jet palette. It finds the unique colors first and matches only those against the palette values. With fewer pixels to match, the algorithm runs much faster. Here are the times:
Before:
Elapsed time is 0.619049 seconds.
After:
Elapsed time is 0.061778 seconds.
In the second image, you're using the default colormap, i.e. jet. If you want grayscale, then try using colormap(gray).

Simplifying an image in matlab

I have a picture of a handwritten letter (say the letter "y"). Keeping only the first of the three color values (since it is a grayscale image), I get a 111x81 matrix which I call aLetter. I can see this image (please ignore the title) using:
colormap gray; image(aLetter,'CDataMapping','scaled')
What I want is to remove the white space around this letter and somehow average the remaining pixels so that I have an 8x8 matrix (let's call it simpleALetter). Now if I use:
colormap gray; image(simpleALetter,'CDataMapping','scaled')
I should see a pixellated version of the letter:
Any advice on how to do this would be greatly appreciated!
You need several steps to achieve what you want (updated in light of @rwong's observation that I had white and black flipped…):
Find the approximate 'bounding box' of the letter:
make sure that "text" is the highest value in the image
set things that are "not text" to zero - anything below a threshold
sum along row and column, find non-zero pixels
upsample the image in the bounding box to a multiple of 8
downsample to 8x8
Here is how you might do that with your situation
aLetter = max(aLetter(:)) - aLetter; % invert image: now white = close to zero
aLetter = aLetter - min(aLetter(:)); % make the smallest value zero
maxA = max(aLetter(:));
aLetter(aLetter < 0.1 * maxA) = 0; % thresholding; play with this to set "white" to zero
% find the bounding box:
rowsum = sum(aLetter, 1);
colsum = sum(aLetter, 2);
nonzeroH = find(rowsum);
nonzeroV = find(colsum);
smallerLetter = aLetter(nonzeroV(1):nonzeroV(end), nonzeroH(1):nonzeroH(end));
% now we have the box, but it's not 8x8 yet. Resampling:
sz = size(smallerLetter);
% first upsample in both X and Y by a factor 8:
bigLetter = repmat(reshape(smallerLetter, [1 sz(1) 1 sz(2)]), [8 1 8 1]);
% then reshape and sum so you end up with 8x8 in the final matrix:
letter8 = squeeze(sum(sum(reshape(bigLetter, [sz(1) 8 sz(2) 8]), 3), 1));
% finally, flip it back "the right way" black is black and white is white:
letter8 = 255 - (letter8 * 255 / max(letter8(:)));
You can do this with explicit for loops but it would be much slower.
You can also use some of the blockproc functions in Matlab but I am using Freemat tonight and it doesn't have those… Neither does it have any image processing toolbox functions, so this is "hard core".
As for picking a good threshold: if you know that >90% of your image is "white", you could determine the correct threshold dynamically by sorting the pixels; as I mentioned in the comment in the code, "play with it" until you find something that works in your situation. A sketch of that idea follows.
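A sketch of that dynamic-threshold idea (in Python/NumPy rather than MATLAB, purely to illustrate the percentile trick; the 90% figure comes from the sentence above):

import numpy as np

def dynamic_threshold(inverted, background_fraction=0.9):
    # after inversion, background pixels are near zero; if ~90% of the
    # image is background, the 90th percentile separates it from the text
    return np.percentile(inverted, 100 * background_fraction)

# usage sketch: zero out everything at or below the threshold
# a[a <= dynamic_threshold(a)] = 0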

Algorithm to make overly bright (HDR) colours become white?

You know how every colour eventually turns white in an image if it's bright enough or sufficiently over-exposed? I'm trying to figure out a function that does this, to apply to generated HDR images, in a realistic and pleasing-looking way (using idealised camera performance as a reference, I guess).
Here's the problem the algorithm/function I want should solve: say you have an orange pixel with the (linear RGB) values {1.0, 0.2, 0.0}. Everything is fine if you multiply each value by a factor of 1.0 or less, but say you multiply that pixel by 6; now you get {6.0, 1.2, 0.0}. What do you do with your out-of-range red and green values of 6.0 and 1.2? You could clip them, which would give you {1.0, 1.0, 0.0}; sadly, that is what Photoshop and 3DS Max seem to do, but it looks very wrong, as your formerly orange pixel is now yellow (so if you start with any saturated hue, meaning at least one channel is 0.0, you always end up with either magenta, yellow or cyan), and it will never become white.
I considered taking half of the excess of one channel and splitting it equally between the other channels, so for example {1.6, 0.5, 0.1} would become {1.0, 0.8, 0.4}, but it's too simplistic and not very realistic. I strongly doubt that an acceptable solution could be anywhere near this trivial.
I'm sure there must have been research done on the topic, but I cannot find any relevant literature and sensitometry doesn't seem to be quite what I'm looking for.
Modifying the Python code I left in an answer on another question to work in the range [0.0-1.0]:
def redistribute_rgb(r, g, b):
    threshold = 1.0
    m = max(r, g, b)
    if m <= threshold:
        return r, g, b
    total = r + g + b
    if total >= 3 * threshold:
        return threshold, threshold, threshold
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return gray + x * r, gray + x * g, gray + x * b
This should return acceptable results in either a linear or gamma-corrected color space, although linear will be better.
Multiplying each r,g,b value by the same amount retains their original proportions and thus the hue, up to the point where x=0 and you've achieved white. You've expressed interest in a non-linear response once clipping starts, but I'm not entirely sure how to work that in. The math was carefully chosen so that at least one of the returned values will be at the threshold, and none will be above.
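For the curious, a quick check of that math (my own algebra, using the function's variable names, with T = threshold, m = max(r, g, b) and total = r + g + b):

    gray + x*m = (T - x*m) + x*m = T
    (gray + x*r) + (gray + x*g) + (gray + x*b) = 3*T - x*(3*m - total)
                                               = 3*T - (3*T - total)
                                               = total

So the brightest output channel lands exactly on the threshold, while the channel sum (and hence the overall intensity) is preserved.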
Running this on your example of (1.6, 0.5, 0.1) returns (1.0, 0.6615, 0.5385).
I've found a way to do it based on Mark Ransom's suggestion, with a twist. When the colour is out of gamut, we compute the grey colour of equivalent perceptual luminosity, then we linearly interpolate between the out-of-gamut input colour and that grey value to find the first in-gamut colour. Weighting each RGB channel to get the perceptual-luminosity part is the tricky part, seeing as the most commonly used formula (the Rec. 709 luma coefficients), L = 0.2126*red + 0.7152*green + 0.0722*blue, is quite blatantly wrong to my eye, as it makes blue way too bright. Instead I did some tests and chose the weights that looked most correct to me. These are not definitive and you might want to tweak them, although for this particular problem that is perhaps not too crucial.
Or, in fewer words: the solution is to desaturate the out-of-gamut colour just enough that it is in gamut.
Here is my solution in C code. All variables are in floating point format.
Wr = 0.125; Wg = 0.68; Wb = 0.195;   // these are the weights for each colour
max = MAXN(MAXN(red, grn), blu);     // max is the maximum value of the 3 colours
if (max > 1.)                        // if the colour is out of gamut
{
    L = Wr*red + Wg*grn + Wb*blu;    // luminosity of the colour's grey point
    if (L < 1.)                      // if the grey point is no brighter than white
    {
        // t represents the ratio on the line between the input colour
        // and its corresponding grey point. t is between 0 and 1,
        // a lower t meaning closer to the grey point and a
        // higher t meaning closer to the input colour
        t = (1.-L) / (max-L);
        // a simple linear interpolation between the
        // input colour and its grey point
        red = red*t + L*(1.-t);
        grn = grn*t + L*(1.-t);
        blu = blu*t + L*(1.-t);
    }
    else                             // if it's too bright regardless of saturation
    {
        red = grn = blu = 1.;
    }
}
Here's what it looks like with a linear orange gradient:
It does not use anything like an arbitrary gamma, which is good; the only mostly arbitrary thing is the luminosity weights, but I guess those are quite necessary.
You have to map it to some non-linear scale. For example: http://en.wikipedia.org/wiki/Gamma_correction .
Ex: Let y = f(x) = log(1+x) - log(1-x) define the "actual" luminance.
The inverse function is x = g(y) = (e^y - 1)/(e^y + 1).
Now, you have the values x=1 and x=0.2. For the first case, the corresponding y is infinity. Six times infinity is still infinity. If you apply the function g, you get x_new = 1.
For x=0.2, y = 0.4054651. After multiplying by 6, y_new = 2.432791. The corresponding x_new = 0.8385876.
For x=0, x_new will still be 0 (I will leave the calculations to you).
So starting from (1.0, 0.2, 0.0), your new set of points is (1.0, 0.8385876, 0.0).
This is one example of mapping function. There are infinite number of them. Choose one that looks best to you.
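A runnable sketch of that worked example (my transcription of the formulas above; note that g(y) = (e^y - 1)/(e^y + 1) is just tanh(y/2)):

from math import exp, log

def f(x):    # forward mapping: y = log(1+x) - log(1-x)
    return log(1 + x) - log(1 - x)

def g(y):    # inverse mapping: x = (e^y - 1)/(e^y + 1)
    return (exp(y) - 1) / (exp(y) + 1)

def scale_channel(x, factor):
    if x >= 1.0:             # y would be infinite, and so would any multiple of it
        return 1.0
    return g(f(x) * factor)

print(tuple(round(scale_channel(c, 6), 7) for c in (1.0, 0.2, 0.0)))
# -> (1.0, 0.8385876, 0.0), matching the worked example above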

Ruby, Generate a random hex color (only light colors)

I know this is possibly a duplicate of this question:
Ruby, Generate a random hex color
My question is slightly different: I need to know how to generate only light random hex colors, not dark ones.
In this thread, colour luminance is described with the formula
(0.2126*r) + (0.7152*g) + (0.0722*b)
The same formula for luminance is given in Wikipedia (it is taken from this publication). It reflects human perception, with green being the most "intensive" and blue the least.
Therefore, you can keep selecting random r, g, b values until the luminance goes above the division between light and dark (on the 0 to 255 scale). For example:
lum, ary = 0, []
while lum < 128
  ary = (1..3).collect { rand(256) }
  lum = ary[0]*0.2126 + ary[1]*0.7152 + ary[2]*0.0722
end
Another article refers to brightness, defined as the arithmetic mean of r, g and b. Note that brightness is even more subjective, as a given target luminance can elicit different perceptions of brightness in different contexts (in particular, the surrounding colours can affect your perception).
All in all, it depends on which colours you consider "light".
Just some pointers:
Use HSL and generate the individual values randomly, but keeping L in the interval of your choosing. Then convert to RGB, if needed.
It's a bit harder than generating RGB with all components over a certain value (say 0x7f), but this is the way to go if you want the colors distributed evenly; a sketch follows below.
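A sketch of that HSL approach (using Python's standard colorsys module, since the idea is language-independent; the lightness bounds are arbitrary placeholders):

import colorsys
import random

def random_light_hex(l_min=0.7, l_max=0.9):
    h = random.random()                      # hue: anywhere on the wheel
    s = random.random()                      # saturation: unrestricted
    l = random.uniform(l_min, l_max)         # lightness: kept high
    r, g, b = colorsys.hls_to_rgb(h, l, s)   # note: colorsys uses HLS order
    return "#%02x%02x%02x" % tuple(int(round(v * 255)) for v in (r, g, b))

print(random_light_hex())   # e.g. "#e8cdb9" (output is random)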
I found that restricting each channel to the range 128 to 255 gives the lighter colors (note that rand.Next's upper bound is exclusive):
Dim rand As New Random
Dim col As Color
col = Color.FromArgb(rand.Next(128, 256), rand.Next(128, 256), rand.Next(128, 256))
All colors where each of r, g, b is greater than 0x7f:
color = (0..2).map{"%0x" % (rand * 0x80 + 0x80)}.join
I modified one of the answers from the linked question (Daniel Spiewak's answer) to come up with something that is pretty flexible in terms of excluding darker colors:
floor = 0x22 # meaning the darkest possible color is #222222
r = (rand(256 - floor) + floor).to_s 16
g = (rand(256 - floor) + floor).to_s 16
b = (rand(256 - floor) + floor).to_s 16
[r, g, b].map { |h| h.rjust 2, '0' }.join
You can change the floor value to suit your needs. A higher value will limit the output to lighter colors, and a lower value will allow darker colors.
A really nice solution is provided by the color-generator gem, where you can call:
ColorGenerator.new(saturation: 0.75, lightness: 0.5).create_hex
