RGB Similar Color Approximation Algorithm

Given that RGB can represent 256^3 = 16,777,216 combinations, and that the human eye can only distinguish roughly 10,000,000 colors, there is a surplus of about 6,777,216 RGB combinations that are chromatically indistinguishable from counterpart colors.
I believe compression algorithms work on this basis when approximating away spatial differences in color ranges across a frame. With that in mind, how can one reliably compute whether a given color is within a range of 'similarity' to another?
Of course, 'similarity' will be some kind of arbitrary/tunable parameter that can be tweaked, but this is an approximation anyway. So, are there any pointers, pseudocode, intuitive code samples, or resources out there to help me model such a function?
Many thanks for your help
Many thanks for your help

There are many ways of computing distances between colors, the simplest ones being defined on color components in any color space. These are common "distances" or metrics between RGB colors (r1,g1,b1) and (r2,g2,b2):
L1: abs(r1-r2) + abs(g1-g2) + abs(b1-b2)
L2: sqrt((r1-r2)² + (g1-g2)² + (b1-b2)²)
L∞: max(abs(r1-r2), abs(g1-g2), abs(b1-b2))
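As a quick sketch, the three metrics above in Python (the function names are mine):

```python
def dist_l1(c1, c2):
    """Manhattan distance: sum of absolute channel differences."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def dist_l2(c1, c2):
    """Euclidean distance in RGB space."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def dist_linf(c1, c2):
    """Chebyshev distance: largest single-channel difference."""
    return max(abs(a - b) for a, b in zip(c1, c2))

red, dark_red = (255, 0, 0), (200, 0, 0)
print(dist_l1(red, dark_red))    # 55
print(dist_l2(red, dark_red))    # 55.0
print(dist_linf(red, dark_red))  # 55
```

A color would then count as "similar" to another when the chosen distance falls below your tunable threshold.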
These however don't take into account the fact that human vision is less sensitive to color than to brightness. For optimal results you should convert from RGB to a color space that encodes brightness and color separately. Then use one of the above metrics in the new color space, possibly giving more weight to the brightness component and less to the color components.
Areas of color that are indistinguishable from each other are called MacAdam ellipses. The ellipses become nearly circular in the CIELUV and CIELAB color spaces, which is great for computation, but unfortunately converting from RGB into these color spaces is not so simple.
JPEG converts colors into YCbCr, where Y is brightness and the two C's encode color, and then halves the resolution of the C components. You could do the same and then use a weighted version of one of the above metrics, for example:
diff = sqrt(1.4*sqr(y1-y2) + .8*sqr(cb1-cb2) + .8*sqr(cr1-cr2))
The Wikipedia article on color difference has more examples for different color spaces.
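A sketch of that approach, using the full-range BT.601 RGB-to-YCbCr conversion that JPEG uses; the 1.4/0.8 weights come from the formula above and are tunable, not canonical:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion, as used by JPEG/JFIF."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def weighted_diff(c1, c2, wy=1.4, wc=0.8):
    """Weighted Euclidean distance in YCbCr: luma weighted more
    heavily than the two chroma channels."""
    y1, cb1, cr1 = rgb_to_ycbcr(*c1)
    y2, cb2, cr2 = rgb_to_ycbcr(*c2)
    return (wy * (y1 - y2) ** 2
            + wc * (cb1 - cb2) ** 2
            + wc * (cr1 - cr2) ** 2) ** 0.5

print(weighted_diff((255, 0, 0), (200, 0, 0)))
```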

Perceptual color difference can be calculated using the CIEDE2000 color-difference formula. CIEDE2000 is based on the LCH color space (Lightness, Chroma, and Hue), which is represented as a cylinder.
A less accurate (but more manageable) model is the CIE76 color-difference formula, which is based on the Lab color space (L*a*b*). There are no simple formulas for conversion between RGB or CMYK values and L*a*b*, because the RGB and CMYK color models are device dependent. The RGB or CMYK values first need to be transformed to a specific absolute color space, such as sRGB or Adobe RGB. This adjustment will be device dependent, but the resulting data will be device independent, allowing the data to be transformed to the CIE 1931 color space and then to L*a*b*. This article explains the procedure and the formulas.
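For reference, a minimal sRGB-to-L*a*b* conversion (D65 white point) plus the CIE76 difference, assuming 8-bit sRGB input. The matrix and constants are the standard published ones, but treat this as a sketch rather than a vetted colorimetry library:

```python
def srgb_to_lab(r, g, b):
    # 1) sRGB 0..255 -> linear-light 0..1 (inverse sRGB gamma)
    def lin(u):
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # 2) linear RGB -> CIE XYZ (sRGB primaries, D65 white)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # 3) XYZ -> L*a*b*, normalized by the D65 reference white
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e76(lab1, lab2):
    """CIE76 color difference: plain Euclidean distance in L*a*b*."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

Sanity check: white should come out near L* = 100, a* = 0, b* = 0, and black near L* = 0.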

The RGB color system is designed such that if two colors have component values close to each other, the colors are also perceptually close.
Example: the color RGB = (100, 100, 100) is perceptually almost the same as RGB = (101, 101, 100), RGB = (98, 100, 99), and so on.

Related

How do colour spaces manage to represent different sized sections of the visible colour space?

I recently watched this YouTube video (link: https://youtu.be/iXKvwPjCGnY) about colour spaces. Interested, I looked it up, and it turns out different colour spaces can represent different "subsets" of the visible spectrum, and not all of these subsets are the same size. What I don't understand is how this is achieved: as long as the same number of bytes is used to represent each pixel, there are only so many permutations regardless of encoding, and therefore a fixed number of distinct colors. Now, I do not really understand color spaces; maybe they do use different numbers of bytes. I tried looking it up, but most articles were too obscure and jargony, especially Wikipedia. Can someone help me out here?
You are confusing gamut and bit depth.
Gamut represents the range of color that can be represented by a color-space.
Bit depth represents the precision with which you can define a color within a gamut.
So, if gamut were analogous to the size of a display, bit depth would correspond to the resolution of that display. You can have small displays with very high resolution and vice versa; they are not dependent upon one another.
This also means that a color-space with a bigger gamut, for the same bit depth, will display colors that look further apart than if they were in a smaller gamut.
You can see this effect in the following images from the Wikipedia page for color depth (a synonym of bit depth), though here the gamut (the sRGB gamut) stays constant while the bit depth decreases:
24-bit color depth vs 4-bit color depth
You can see the colors in the 4-bit variant are just as colorful, but far fewer of them can be represented compared to the 24-bit variant.
Gamut, viewed on a 2D surface, represents the area; bit depth represents how many colors fit in that area. The more colors there are, the smaller the distance between two colors. It is also worth noting that those distances need not be linear: you can have higher densities in different places depending on the color space's specification. sRGB, for example, is gamma compressed, and so has a higher density of representable colors close to black than close to white.
Also, you said
[...] different colour spaces can represent different "subsets" of the visible spectrum.
which isn't really correct. There is nothing stopping a color-space from defining colors that fall outside of the set of colors we can see. In the CIELAB color-space, for example, it is possible to get a color that would be extremely red, redder than you could see, while at the same time having no lightness whatsoever.

Calculate contrast of a color image(RGB)

In a black-and-white image, we can easily calculate the contrast as (total number of white pixels - total number of black pixels).
How can I calculate this for a color (RGB) image?
Any ideas will be appreciated.
You may use the standard deviation of the grayscale image as a measure of contrast. This is called "RMS contrast"; see https://en.wikipedia.org/wiki/Contrast_(vision)#RMS_contrast for details.
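A minimal sketch of RMS contrast, assuming the image has already been converted to a flat list of grayscale intensities:

```python
import statistics

def rms_contrast(gray):
    """RMS contrast = (population) standard deviation of the
    pixel intensities; `gray` is a flat list of grayscale values."""
    return statistics.pstdev(gray)

flat = [128] * 100            # a uniform image has zero contrast
checker = [0, 255] * 50       # half black, half white: maximal spread
print(rms_contrast(flat))     # 0.0
print(rms_contrast(checker))  # 127.5
```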
Contrast is defined as the difference between the highest and lowest intensity value of the image. So you can easily calculate it from the respective histogram.
Example: If you have a plain white image, the lowest and highest value are both 255, thus the contrast is 255-255=0. If you have an image with only black (0) and white (255) you have a contrast of 255, the highest possible value.
The same method can be applied to color images if you calculate the luminance of each pixel (and thus convert the image to greyscale). There are several different methods to convert images to greyscale; you can choose whichever you like.
To make the approach more sophisticated, it is advisable to ignore a certain percentage of pixels to account for outliers (otherwise a single white and a single black pixel would lead to "full contrast" regardless of all other pixels). Another approach would be to take the number of dark and light pixels into account, as described by Yves Daoust. This approach has the flaw that one has to set an arbitrary threshold to determine which pixels count as dark/light (usually 127).
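The outlier-trimming idea can be sketched like this (the trim fraction is an arbitrary tunable, as noted above):

```python
def minmax_contrast(gray, trim=0.01):
    """Min-max contrast of a flat list of grayscale values,
    ignoring a fraction `trim` of outlier pixels at each end."""
    s = sorted(gray)
    k = int(len(s) * trim)
    trimmed = s[k:len(s) - k] if k else s
    return trimmed[-1] - trimmed[0]

# mostly mid-gray image with one black and one white outlier pixel
gray = [120] * 98 + [0, 255]
print(minmax_contrast(gray, trim=0.0))   # 255 -- outliers dominate
print(minmax_contrast(gray, trim=0.01))  # 0   -- outliers ignored
```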
This does not have a single answer. One idea is to operate on each of the three channels separately: compute the histogram of each channel and operate on it.
A simple Google search turns up many relevant algorithms; one that I have used is root mean square (the standard deviation of the pixel intensities).

Using CIELab color space for color reduction algorithm with dithering

I've written a program for converting images to a custom file format that uses a limited color palette of 30 specific colours.
In my application I have given the option of working in RGB or YUV color spaces and the option of: Sierra, Jarvis or Floyd-Steinberg dithering.
However, I have noticed that Photoshop's Save for Web feature, with its use of color tables to limit the palette, does a much better job than my program.
So I would like to improve my application to give better results.
Right now, with the example of Floyd-Steinberg dithering I'm essentially using this pseudo code
for each y from top to bottom
    for each x from left to right
        oldpixel := pixel[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixel[x][y] := newpixel
        quant_error := oldpixel - newpixel
        pixel[x+1][y  ] := pixel[x+1][y  ] + quant_error * 7/16
        pixel[x-1][y+1] := pixel[x-1][y+1] + quant_error * 3/16
        pixel[x  ][y+1] := pixel[x  ][y+1] + quant_error * 5/16
        pixel[x+1][y+1] := pixel[x+1][y+1] + quant_error * 1/16
My pixels are stored in RGB format and to find the closest palette color I am using the Euclidean distance in RGB/YUV.
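For reference, the pseudo code above as runnable Python, reduced to grayscale for brevity (extending it to RGB means applying the same logic per channel):

```python
def floyd_steinberg(img, palette):
    """Dither a grayscale image (list of rows of numbers) onto the
    nearest palette values, diffusing the quantization error with
    the classic 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

# a flat 4x4 gray image dithered to a two-entry palette:
out = floyd_steinberg([[100.0] * 4 for _ in range(4)], [0, 255])
# every output pixel snaps to an exact palette entry
```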
I have been reading about the CIE94 and CIEDE2000 color difference algorithms and these should work better for my "find_closest_palette_color" function.
To do these calculations I'll have to convert from RGB to the CIELab color space. Can I also use CIELab when distributing errors in my dither algorithms, by:
1) Converting the whole image to the CIELab color space
2) For each pixel, finding the closest color in my palette using CIE94 or CIEDE2000
3) Calculating the error in the CIELab color space (L*, a*, b* instead of RGB)
4) Distributing the error, in accordance with whatever dither algorithm I am using, with the same weights I used in RGB?
Yes; in fact, Lab is much better suited for this purpose, because the Euclidean distance between colors in Lab reflects the perceptual distance between the colors, whereas the distance in RGB does not.
1) Convert the whole image to the CIELab color space, with caching.
2) Cluster the pixels of the source image to find the best palette, using a ratio of CIE76 and CIEDE2000.
3) Calculate the error in the CIELab color space (L*, a*, b* instead of RGB).
4) Mix and match the error with the aid of blue-noise distribution.
5) Use a generalized Hilbert ("gilbert") space-filling curve traversal instead of Floyd-Steinberg's raster scan to diffuse errors while minimizing the MSE in RGB.
Java implementation:
https://github.com/mcychan/nQuant.j2se/

Value as colour representation

Converting a value to a colour is well known, I do understand the following two approaches (very well described in changing rgb color values to represent a value)
Value as shades of grey
Value as brightness of a base colour (e.g. brightness of blue)
But what is the best algorithm when I want to use the full colour range ("all colours")? When I use greys with 8-bit RGB values, I have a representation of 256 shades (white to black). If I used the whole range, I could use more shades, and they would also be easier to recognize.
Basically I need the algorithm in JavaScript, but I guess code in C#, Java, or pseudocode would do as well. The legend at the bottom shows the encoding; I am looking for the algorithm behind it.
So, having a range of values (e.g. 1-1000), I could represent 1 as white and 1000 as black, but I could also represent 1 as yellow and 1000 as blue. Is there a standard algorithm for this? The example here shows that they use colour intervals. I do not only want to use greys or change the brightness, but to use all colours.
This is a visual demonstration (Flash required). Given values represented in a color scheme, my goal is to calculate the colours.
I do have a linear colour range, e.g. from 1-30000
-- Update --
Here I found that there is something called Lab space:
Lab space is a way of representing colours where points that are close to each other are those that look similar to each other to humans.
So what I would need is an algorithm to represent the linear values in this lab space.
There are two basic ways to specify colors. One is a pre-defined list of colors (a palette), where your color value is an index into this list. This is how old 8-bit color systems worked, and how GIF images still work. There are lists of web-safe colors, e.g. http://en.wikipedia.org/wiki/Web_colors, that typically fit into an 8-bit value. Often similar colors are adjacent, but sometimes not.
A palette has the advantage of requiring a small amount of data per pixel, but the disadvantage that you're limited in the number of different colors that can be on the screen at the same time.
The other basic way is to specify the coordinates of a color. One way is RGB, with a separate value for each primary color. Another is Hue/Saturation/Luminance. CMYK (Cyan, Magenta, Yellow and sometimes blacK) is used for print. This is what's typically referred to as true color and when you use a phrase like "all colors" it sounds like you're looking for a solution like this. For gradients and such HSL might be a perfect fit for you. For example, a gradient from a color to grey simply reduces the saturation value. If all you want are "pure" colors, then fix the saturation and luminance values and vary the hue.
Nearly all drawing systems require RGB, but the conversion from HSL to RGB is straightforward: http://en.wikipedia.org/wiki/HSL_and_HSV
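As a sketch of the fix-saturation-and-luminance idea, here is a value-to-hue mapping using Python's standard colorsys module (the value range and the red-to-blue hue span are arbitrary choices, and the same logic ports directly to JavaScript):

```python
import colorsys

def value_to_color(v, vmin=1, vmax=30000):
    """Map a value in [vmin, vmax] to a fully saturated hue.
    Hue runs from 0 (red) to 2/3 (blue), with saturation and
    lightness held fixed, so only the hue varies with the value."""
    t = (v - vmin) / (vmax - vmin)  # normalize to [0, 1]
    r, g, b = colorsys.hls_to_rgb(t * 2 / 3, 0.5, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))

print(value_to_color(1))      # (255, 0, 0) -- red at the low end
print(value_to_color(30000))  # (0, 0, 255) -- blue at the high end
```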
If you can't spare the full 24 bits per pixel (8 bits per channel; 32-bit color is the same but adds a transparency channel), you can use 15- or 16-bit color. It's the same idea, but instead of 8 bits per channel you get 5 bits each (15-bit) or 5-6-5 (16-bit; green gets the extra bit because our eyes are more sensitive to shades of green). That fits into a short integer.
It depends on the purposes of your datasets.
For example, you can assign a color to each range of values (0-100 - red, 100-200 - green, 200-300 - blue) by changing the brightness within the range.
Horst,
The example you gave does not create gradients. Instead, it uses N preset colors from an array and picks the matching color, as umbr points out. Something like this:
a = ["#ffffff", "#ff00ff", "#ff0000", "#888888", /* ... */];
c = a[Math.floor(pos / 1000)];
where pos is your value from 1 to 30,000 and c is the color you want to use. (You'd need to define the index better than pos / 1000 for this to work right in all situations.)
If you want a gradient effect, you can use the simple math shown in the other answer you pointed to, although if you want to do that with any number of points, it has to be done with triangles. You'll have a lot of work determining the triangles and properly defining every point.
In JavaScript, it will be dog slow. (With OpenGL it would be instantaneous, and you would not even have to compute the gradients yourself.)
What you need is a transfer function: given a float number, a transfer function generates a color.
See this:
http://http.developer.nvidia.com/GPUGems/gpugems_ch39.html
and this:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-102-transfer-functions.html
The second article says that the isovalue is between [0, 255], but it doesn't have to be in that range. Normally, we scale any float number to the [0, 1] range and apply the transfer function to get the color value.
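A transfer function over [0, 1] can be as simple as piecewise-linear interpolation between a few color stops (the stops below are arbitrary examples, not from either article):

```python
def make_transfer_function(stops):
    """Build a transfer function from (position, (r, g, b)) control
    points, positions ascending in [0, 1]; colors are linearly
    interpolated between neighboring stops."""
    def tf(t):
        t = min(max(t, 0.0), 1.0)  # clamp into [0, 1]
        for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
            if t <= p1:
                f = (t - p0) / (p1 - p0) if p1 > p0 else 0.0
                return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))
        return stops[-1][1]
    return tf

# black -> red -> yellow "heat map"
heat = make_transfer_function([(0.0, (0, 0, 0)),
                               (0.5, (255, 0, 0)),
                               (1.0, (255, 255, 0))])
print(heat(0.0))  # (0, 0, 0)
print(heat(0.5))  # (255, 0, 0)
print(heat(1.0))  # (255, 255, 0)
```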

Is there any way to divide rgb color palette?

I'm trying to generate a color palette which has 16 colors, and I will display this palette in a 4x4 grid.
So I have to find a way to divide the RGB color space, which has 256*256*256 colors, into 16 colors equally and logically.
I think it's going to be a mathematical algorithm, because I'm trying to pick 16 evenly spaced vectors from the 3-dimensional color cube.
Actually, I have found an approach to this "dividing the color palette" problem: I will use the color values after converting RGB to HSV (hue, saturation, value). That way I can use one integer value between 0-360 (hue), or one integer between 0-100 (%), for my color palette.
Finally, I can easily use these values for searching/filtering my data based on color selection: I'm dividing the 0-360 range into 16 pieces equally, so I can easily define 16 different colors.
But thanks for the different approaches.
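The hue-division approach described above can be sketched as follows, using Python's standard colorsys module with full saturation and value:

```python
import colorsys

# 16 hues evenly spaced around the HSV circle (22.5 degrees apart),
# with saturation and value fixed at their maximum
palette = []
for i in range(16):
    r, g, b = colorsys.hsv_to_rgb(i / 16, 1.0, 1.0)
    palette.append((round(r * 255), round(g * 255), round(b * 255)))

print(palette[0])  # (255, 0, 0) -- hue 0 is pure red
```

Note that dividing only the hue yields 16 fully saturated, fully bright colors; it gives no greys, pastels, or dark shades.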
You are basically projecting a cube (R X G X B) onto a square (4 X 4). First, I would start by asking myself what size cube fits inside that square.
1 X 1 X 1 = 1
2 X 2 X 2 = 8
3 X 3 X 3 = 27
The largest cube that fits in the square has 8 colors. At that point, I would note how conveniently 8 is an integral factor of 16.
I think the convenience would tempt me to use 8 basic colors in 2 variants like light and dark or saturated and unsaturated.
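One way to realize the 8-colors-in-2-variants idea, loosely following the classic 16-color CGA scheme (the 0/170 dark levels and 85/255 light levels are that scheme's choice, not a requirement):

```python
from itertools import product

# the 8 corners of the RGB cube, each in a dark variant
# (channels 0 or 170) and a light variant (channels 85 or 255)
palette = []
for low, high in ((0, 170), (85, 255)):
    for r, g, b in product((0, 1), repeat=3):
        palette.append(tuple(high if c else low for c in (r, g, b)))

print(len(palette))  # 16
```

The offset light variant keeps all 16 entries distinct: "light black" becomes dark grey (85, 85, 85) rather than duplicating black.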
You can approach this as a purely mathematical equipartition problem, but then it isn't really about color.
If you are trying to equipartition a color palette in a way that is meaningful to human perception, there are a large number of non-linearities that need to be taken into account, which this answer can only hint at. For example, the colors #fffffe, #fffeff, and #feffff are distinct points in the mathematical space, but are nearly indistinguishable to the human eye.
When the number of selected colors (16) is so small (especially compared to the number of available colors), you'll be much better off hand-picking the good-looking palette or using a standard one (like some pre-defined system or Web palette for 16 color systems) instead of trying to invent a mathematical algorithm for selecting the palette.
A lot depends on what the colors are for. If you just want 16 somewhat arbitrary colors, I would suggest:
black darkgray lightgray white
darkred darkgreen darkblue darkyellow
medred medgreen medblue medyellow
lightred lightgreen lightblue lightyellow
I used that color set for a somewhat cartoonish-colored game (VGA) and found it worked pretty well. I think I sequenced the colors a little differently, but the sequence above would seem logical if arranged in a 4x4 square.
This is a standard problem, known as color quantization.
There are a couple of algorithms for this.
Objective: you basically want to form 16 clusters of your pixels in a 3-dimensional space where each of the 3 axes ranges from 0 to 255.
Methods:
1) Rounding to the first significant bits -- very easy to implement, but does not give good results.
2) The histogram method -- takes medium effort and gives better results.
3) Quad tree -- a state-of-the-art data structure; gives the best results, but implementing the quad-tree data structure is hard.
There might be more algorithms, but I have used these 3.
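Method 1, rounding to the first significant bits, can be sketched as follows. Note that with 2 bits per channel this yields 64 colors, so hitting exactly 16 would need an uneven split of bits across the channels:

```python
def quantize_msb(r, g, b, bits=2):
    """Keep only the top `bits` bits of each 8-bit channel, then
    rescale so the retained levels still span 0..255."""
    levels = (1 << bits) - 1
    shift = 8 - bits
    return tuple(((c >> shift) * 255) // levels for c in (r, g, b))

print(quantize_msb(200, 100, 30))  # (255, 85, 0)
```

Every input color snaps to the nearest of the (2^bits)^3 grid points, which is exactly why the results look coarse: the grid ignores where the image's colors actually cluster.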
Start with the color as an integer for obvious math (or start with hex if you can think in base 16). Add the step size to the color for each desired sample. Convert the color integer to hex, then split the hex into RGB. In this code example the last color will be within one step of hex white (0xffffff).
# calculate color sample sizes
divisions = 16  # number of desired color samples
total_colors = 256**3 - 1
color_samples = total_colors // divisions
print('{0:,} colors in {1:,} parts requires {2:,} per step'.format(total_colors, divisions, color_samples))

# loop to print results
ii = 0
for io in range(0, total_colors, color_samples):
    hex_color = '{0:0>6}'.format(hex(io)[2:])
    rc = hex_color[0:2]  # red
    gc = hex_color[2:4]  # green
    bc = hex_color[4:6]  # blue
    print('{2:>5,} - {0:>10,} in hex {1} | '.format(io, hex_color, ii), end='')
    print('r-{0} g-{1} b-{2} | '.format(rc, gc, bc), end='')
    print('r-{0:0>3} g-{1:0>3} b-{2:0>3}'.format(int(rc, 16), int(gc, 16), int(bc, 16)))
    ii += 1