Color mapping for 16-bit greyscale values with smooth transitions - colormap

I have 16-bits of precision for my data. Trying to visualize this data in greyscale 8-bit leads to significant banding. My thought was that I could store these 16 bit values in colors. For example, the first 8 bits could be in the red channel, and the next 8 bits in the green channel.
However, this will lead to abrupt transitions. Going from 65280 (255,0,0) to 65279 (254,255,0) would cause the color to shift immediately from Red to Yellow. This is really not helpful to human interpretation.
I know that HSB could provide 360 degrees of variation pretty easily, but 360 color values isn't really that much more than the 256 I already have with 8-bit greyscale.
What would be the most appropriate way to represent values between 0-65535 in terms of colors that transition in a way that makes sense to humans?

It's not clear to me why you are counting the RGB values in a way that (255, 0, 0) follows (254, 255, 0). It seems like it should proceed in some variant of:
(0, 0, 0) -> (254, 0, 0) -> (254, 1, 0) -> (254, 254, 0) -> (254, 254, 1) -> (254, 254, 254).
This would represent a transition from black to white through red, orange and yellow without any abrupt changes. Because of the non-linear way our eyes work it may not be perceived as perfectly smooth, but you should be able to do it without significant banding. For example:
let canvas = document.getElementById('canvas')
var ctx = canvas.getContext('2d');
let rgb = [0, 0, 0]
// fill the red, then the green, then the blue channel, one step per pixel column
for (let j = 0; j < 3; j++) {
  for (let i = 0; i < 254; i++) {
    rgb[j]++
    ctx.fillStyle = `rgba(${rgb.join(",")}, 1)`
    ctx.fillRect(i + 254 * j, 0, 1, 50);
  }
}
<canvas id="canvas" width="765"></canvas>
Or through blue by starting at the other end:
let canvas = document.getElementById('canvas')
var ctx = canvas.getContext('2d');
let rgb = [0, 0, 0]
// fill the blue, then the green, then the red channel, one step per pixel column
for (let j = 0; j < 3; j++) {
  for (let i = 0; i < 254; i++) {
    rgb[2-j]++
    ctx.fillStyle = `rgba(${rgb.join(",")}, 1)`
    ctx.fillRect(i + 254 * j, 0, 1, 50);
  }
}
<canvas id="canvas" width="765"></canvas>
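To apply this ramp to the original 0-65535 range, here is one possible mapping as a rough Python sketch. It is not part of the original answer; it assumes 255 steps per channel rather than the 254 used in the snippets above, and fills red, then green, then blue in turn:

def value_to_ramp_color(v, vmax=65535):
    # position along the black -> red -> yellow -> white ramp, 0..765
    t = v / vmax * 765
    r = min(255, int(t))                 # red channel fills first
    g = min(255, max(0, int(t) - 255))   # then green
    b = min(255, max(0, int(t) - 510))   # then blue
    return r, g, b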

It is impossible to do this if you want to keep one property: similar numbers should have similar colours, or, better, the inverse: very different numbers should never map to similar colours.
But since you mention banding, it sounds like the data is gradient-like, so the eye can be more forgiving when two different "number zones" have similar colours, because of the context of the nearby colours.
I would not go to the full 16 bits (that would be too much: we cannot distinguish all RGB colours, and few people have really good monitors). Instead you could build an interface that can zoom in and add more contrast over a restricted range.
For fewer bits, you can start with blue-green (0,x,x) going up [x from 0 to 255], then move to green (0,255,x) [x from 255 to 0], then to black (0,x,0) [x from 255 to 0], then to yellow (x,x,0), then to red (255,0,0), then to black, then to violet (x,0,x), then to blue, then to black, and finally to white (x,x,x).
That is 10 scales, which gives just over 3 additional bits of information.
I would probably never go all the way to black (as you can see, it is repeated), but only take x down to 5 or 10, which reduces the number of bits slightly. Even so, I think you still have about 11 bits of information, and the scales are distinguishable by the human eye.
In this example I used the 6 primary and secondary colours (plus greyscale), so they are easily distinguishable. In theory one could use additional intermediate colours, but I think that would confuse the eye much more.
I would also offer an additional input to shift the original data values up and down, so that the user could inspect some regions in more detail (gaining more bits). But this works nicely only if there are not many extreme changes between nearby values.
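As a sketch of one possible reading of this scale, here is a piecewise-linear lookup in Python. The exact segment endpoints are my interpretation of the description above (and it goes fully to black, which the answer suggests softening):

# segment endpoints (start colour, end colour); an assumption based on the prose
SEGMENTS = [
    ((0, 0, 0), (0, 255, 255)),     # black  -> blue-green
    ((0, 255, 255), (0, 255, 0)),   # blue-green -> green
    ((0, 255, 0), (0, 0, 0)),       # green  -> black
    ((0, 0, 0), (255, 255, 0)),     # black  -> yellow
    ((255, 255, 0), (255, 0, 0)),   # yellow -> red
    ((255, 0, 0), (0, 0, 0)),       # red    -> black
    ((0, 0, 0), (255, 0, 255)),     # black  -> violet
    ((255, 0, 255), (0, 0, 255)),   # violet -> blue
    ((0, 0, 255), (0, 0, 0)),       # blue   -> black
    ((0, 0, 0), (255, 255, 255)),   # black  -> white
]

def scale_color(v, vmax=65535):
    # map v in [0, vmax] onto the 10-segment scale by linear interpolation
    pos = v / vmax * len(SEGMENTS)
    seg = min(int(pos), len(SEGMENTS) - 1)
    t = pos - seg                   # position inside the segment, 0..1
    (r0, g0, b0), (r1, g1, b1) = SEGMENTS[seg]
    return (round(r0 + (r1 - r0) * t),
            round(g0 + (g1 - g0) * t),
            round(b0 + (b1 - b0) * t))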

Related

Generate RGB color set with highest diversity

I'm trying to create an algorithm that will output a set of different RGB color values, that should be as distinct as possible.
For Example:
following a set of 3 colors:
(255, 0, 0) [Red]
(0, 255, 0) [Green]
(0, 0, 255) [Blue]
the next 3 colors would be:
(255, 255, 0) [Yellow]
(0, 255, 255) [Cyan]
(255, 0, 255) [Purple]
The next colors should be in-between the new intervals. Basically, my idea is to traverse the whole color spectrum in systematic intervals, similar to this:
A set of 13 colors should include the color in between 1 and 7, and continue that pattern infinitely.
I'm currently struggling to turn this pattern into an algorithm over RGB values, as it does not seem trivial to me. I'm thankful for any hints that can point me to a solution.
The Wikipedia article on color difference is worth reading, and so is the article on a “low-cost approximation” by CompuPhase linked therein. I will base my attempt on the latter.
You didn't specify a language, so I'll write it in unoptimized Python (except for the integer optimizations already present in the reference article), so that it can be readily translated into other languages.
n_colors = 25
n_global_moves = 32

class Color:
    max_weighted_square_distance = (((512 + 127) * 65025) >> 8) + 4 * 65025 + (((767 - 127) * 65025) >> 8)

    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

    def weighted_square_distance(self, other):
        rm = (self.r + other.r) // 2  # integer division
        dr = self.r - other.r
        dg = self.g - other.g
        db = self.b - other.b
        return (((512 + rm) * dr*dr) >> 8) + 4 * dg*dg + (((767 - rm) * db*db) >> 8)

    def min_weighted_square_distance(self, index, others):
        min_wsd = self.max_weighted_square_distance
        for i in range(0, len(others)):
            if i != index:
                wsd = self.weighted_square_distance(others[i])
                if min_wsd > wsd:
                    min_wsd = wsd
        return min_wsd

    def is_valid(self):
        return 0 <= self.r <= 255 and 0 <= self.g <= 255 and 0 <= self.b <= 255

    def add(self, other):
        return Color(self.r + other.r, self.g + other.g, self.b + other.b)

    def __repr__(self):
        return f"({self.r}, {self.g}, {self.b})"


colors = [Color(127, 127, 127) for i in range(0, n_colors)]

steps = [Color(dr, dg, db) for dr in [-1, 0, 1]
                           for dg in [-1, 0, 1]
                           for db in [-1, 0, 1] if dr or dg or db]  # i.e., except 0,0,0

moved = True
global_move_phase = False
global_move_count = 0
while moved or global_move_phase:
    moved = False
    for index in range(0, len(colors)):
        color = colors[index]
        if global_move_phase:
            best_min_wsd = -1
        else:
            best_min_wsd = color.min_weighted_square_distance(index, colors)
        for step in steps:
            new_color = color.add(step)
            if new_color.is_valid():
                new_min_wsd = new_color.min_weighted_square_distance(index, colors)
                if best_min_wsd < new_min_wsd:
                    best_min_wsd = new_min_wsd
                    colors[index] = new_color
                    moved = True
    if not moved:
        if global_move_count < n_global_moves:
            global_move_count += 1
            global_move_phase = True
    else:
        global_move_phase = False

print(f"n_colors: {n_colors}")
print(f"n_global_moves: {n_global_moves}")
print(colors)
The colors are first set all to grey, i.e., put in the center of the RGB color cube, and then moved in the color cube in such a way as to hopefully maximize the minimum distance between colors.
To save CPU time the square of the distance is used instead of the distance itself, which would require the calculation of a square root.
Colors are moved one at a time, by a maximum of 1 in each of the 3 directions, to one of the adjacent colors that maximizes the minimum distance from the other colors. By so doing, the global minimum distance is (approximately) maximized.
The “global move” phases are needed in order to overcome situations where no color would move, but forcing all colors to move to a position which is not much worse than their current one causes the whole set to find a better configuration with the subsequent regular moves. This can best be seen with 3 colors and no global moves, modifying the weighted square distance to be simply dr*dr + dg*dg + db*db: the configuration becomes
[(2, 0, 0), (0, 253, 255), (255, 255, 2)]
while, by adding 2 global moves, the configuration becomes the expected one
[(0, 0, 0), (0, 255, 255), (255, 255, 0)]
The algorithm produces just one of several possible solutions. Unfortunately, since the metric used is not Euclidean, it's not possible to simply flip the 3 dimensions in the 8 possible combinations (i.e., replace r→255-r and/or the same for g and/or b) to get equivalent solutions. It's probably best to introduce randomness in the order the color moving steps are tried out, and vary the random seed.
I have not corrected for the monitor's gamma, because its purpose is precisely that of altering the spacing of the brightness in order to compensate for the eyes' different sensitivity at high and low brightness. Of course, the screen gamma curve deviates from the ideal, and a (system dependent!) modification of the gamma would yield better results, but the standard gamma is a good starting point.
This is the output of the algorithm for 25 colors:
Note that the first 8 colors (the bottom row and the first 3 colors of the row above) are close to the corners of the RGB cube (they are not at the corners because of the non-Euclidean metric).
First let me ask, do you want to remain in sRGB, and go through each RGB combination?
OR (and this is my assumption) do you actually want colors that are "farthest" from each other? Since you used the term "distinct" I'm going to cover finding color differences.
Model Your Perceptions
sRGB is a colorspace that refers to your display/output. And while the gamma curve is "sorta" perceptually uniform, the overall sRGB colorspace is not; it is intended more to model the display than human perception.
To determine "maximum distance" between colors in terms of perception, you want a model of perception, either using a colorspace that is perceptually uniform or using a color appearance model (CAM).
As you just want sRGB values as a result, using a uniform colorspace is probably sufficient, such as CIELAB or CIELUV. As these use Cartesian coordinates, the difference between two colors in (L*a*b*) is simply the Euclidean distance.
If you want to work with polar coordinates (i.e. hue angle) then you can go one step past CIELAB, into CIELCh.
How To Do It
I suggest Bruce Lindbloom's site for the relevant math.
The simplified steps (a rough sketch in code follows the list):
1. Linearize the sRGB by removing the gamma curve from each of the three color channels.
2. Convert the linearized values into CIE XYZ (use D65, no adaptation).
3. Convert XYZ into L* a* b*.
4. Find the opposite:
   a. Find the "opposite" color by plotting a line through 0, making the line equal in distance from both sides of zero. OR
   b. ALT: Do one more transform from LAB into CIELCh, then find the opposite by rotating hue 180 degrees. Then convert back to LAB.
5. Convert LAB to XYZ.
6. Convert XYZ to sRGB.
7. Add the sRGB gamma curve back to each channel.
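A rough Python sketch of steps 1-7 above, using the hue-rotation variant (4b). The matrices and white point are the usual D65 sRGB reference values of the kind tabulated on Bruce Lindbloom's site; they are standard constants, not taken from this answer, and the result may need clamping to the 0-1 range:

import math

M_RGB2XYZ = [(0.4124564, 0.3575761, 0.1804375),
             (0.2126729, 0.7151522, 0.0721750),
             (0.0193339, 0.1191920, 0.9503041)]
M_XYZ2RGB = [(3.2404542, -1.5371385, -0.4985314),
             (-0.9692660, 1.8760108, 0.0415560),
             (0.0556434, -0.2040259, 1.0572252)]
WHITE = (0.95047, 1.0, 1.08883)   # D65 reference white

def srgb_to_lab(r, g, b):
    # steps 1-3: remove gamma, linear RGB -> XYZ -> L*a*b* (inputs in 0..1)
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (r, g, b)]
    xyz = [sum(m * c for m, c in zip(row, lin)) for row in M_RGB2XYZ]
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip(xyz, WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb(L, a, b):
    # steps 5-7: L*a*b* -> XYZ -> linear RGB -> gamma-encoded sRGB
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    xyz = [finv(c) * w for c, w in zip((fx, fy, fz), WHITE)]
    lin = [sum(m * c for m, c in zip(row, xyz)) for row in M_XYZ2RGB]
    # results can fall outside 0..1 (out of gamut) and may need clamping
    return tuple(12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
                 for c in lin)

def opposite_color(r, g, b):
    # step 4b: rotate hue by 180 degrees in LCh, keeping L* unchanged
    L, a_, b_ = srgb_to_lab(r, g, b)
    C, h = math.hypot(a_, b_), math.atan2(b_, a_) + math.pi
    return lab_to_srgb(L, C * math.cos(h), C * math.sin(h))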
Staying in sRGB?
If you are less concerned about perceptual uniformity, then you could just stay in sRGB, though the results will be less accurate. In this case all you need to do is take the difference of each channel relative to 255 (i.e. invert each channel):
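A one-line sketch of that, assuming 0-255 integer channel values:

def srgb_difference(r, g, b):
    # invert each channel relative to 255
    return 255 - r, 255 - g, 255 - b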
What's The Difference of the Differences?
Here are some comparisons of the two methods discussed above:
For starting color #0C5490 sRGB Difference Method:
Same starting color, but using CIELAB L* C* h* (and just rotating hue 180 degrees, no adjustment to L*).
Starting color #DFE217, sRGB difference method:
Here in CIELAB LCh, just rotating hue 180:
And again in LCh, but this time also adjusting L* as (100 - L*firstcolor)
Now you'll notice a lot of change in hue angle on these: the truth is that while LAB is "somewhat uniform," it's pretty squirrelly in the area of blue.
Take a look at the numbers:
They seem substantially different for hue, chroma, a, b... yet they create the same HEX color value! So yes, even CIELAB has inaccuracies (especially blue).
If you want more accuracy, try CIECAM02

Algorithm to make overly bright (HDR) colours become white?

You know how every colour eventually turns white in an image if it's bright enough or sufficiently over-exposed? I'm trying to figure out a function to do this to apply to generated HDR images, in a realistic and pleasing looking way (using idealised camera performance as a reference I guess).
The problem the algorithm/function I want to obtain should solve is, let's say you have an orange pixel with the (linear RGB) values {1.0, 0.2, 0.0}. Everything is fine if you multiply each value by a factor of 1.0 or less, but let's say you multiply that pixel by 6, now you get {6.0, 1.2, 0.0}, what do you do with your out of range red and green value of 6.0 and 1.2? You could clip them which would give you {1.0, 1.0, 0.0}, which sadly is what Photoshop and 3DS Max seem to do, but it looks so very wrong as now your formerly orange pixel is yellow (so if you start with any saturated hue (meaning at least one channel is 0.0) you always end up with either magenta, yellow or cyan) and it will never become white.
I considered taking half of the excess of one channel and splitting it equally between the other channels, so for example {1.6, 0.5, 0.1} would become {1.0, 0.8, 0.4} but it's too simplistic and not very realistic. I strongly doubt that an acceptable solution could be anywhere near this trivial.
I'm sure there must have been research done on the topic, but I cannot find any relevant literature and sensitometry doesn't seem to be quite what I'm looking for.
Modifying the Python code I left in an answer on another question to work in the range [0.0-1.0]:
def redistribute_rgb(r, g, b):
    threshold = 1.0
    m = max(r, g, b)
    if m <= threshold:
        return r, g, b
    total = r + g + b
    if total >= 3 * threshold:
        return threshold, threshold, threshold
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return gray + x * r, gray + x * g, gray + x * b
This should return acceptable results in either a linear or gamma-corrected color space, although linear will be better.
Multiplying each r,g,b value by the same amount retains their original proportions and thus the hue, up to the point where x=0 and you've achieved white. You've expressed interest in a non-linear response once clipping starts, but I'm not entirely sure how to work that in. The math was carefully chosen so that at least one of the returned values will be at the threshold, and none will be above.
Running this on your example of (1.6, 0.5, 0.1) returns (1.0, 0.6615, 0.5385).
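For reference, a quick check of that example, assuming the function above is defined as shown:

print(redistribute_rgb(1.6, 0.5, 0.1))
# -> approximately (1.0, 0.6615, 0.5385)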
I've found a way to do it based on Mark Ransom's suggestion with a twist. When the colour is out of gamut we compute the grey colour of equivalent perceptual luminosity then we linearly interpolate between the out-of-gamut input colour and that grey value to find the first in-gamut colour. Weighting each RGB channel to get the perceptual luminosity part is the tricky part seeing as the most commonly used formula from CIELab L = 0.2126*red + 0.7152*green + 0.0722*blue is quite blatantly wrong as it makes the blue way too bright. Instead I did some tests and chose the weights which looked the most correct to me, though these are not definite and you might want to tweak them, although for this particular problem this is perhaps not too crucial.
Or in fewer words the solution is to desaturate the out-of-gamut colour just enough that it might be in-gamut.
Here is my solution in C code. All variables are in floating point format.
Wr = 0.125; Wg = 0.68; Wb = 0.195;   // these are the weights for each colour
max = MAXN(MAXN(red, grn), blu);     // max is the maximum value of the 3 colours

if (max > 1.)                        // if the colour is out of gamut
{
    L = Wr*red + Wg*grn + Wb*blu;    // luminosity of the colour's grey point

    if (L < 1.)                      // if the grey point is no brighter than white
    {
        // t represents the ratio on the line between the input colour
        // and its corresponding grey point. t is between 0 and 1,
        // a lower t meaning closer to the grey point and a
        // higher t meaning closer to the input colour
        t = (1.-L) / (max-L);

        // a simple linear interpolation between the
        // input colour and its grey point
        red = red*t + L*(1.-t);
        grn = grn*t + L*(1.-t);
        blu = blu*t + L*(1.-t);
    }
    else                             // if it's too bright regardless of saturation
    {
        red = grn = blu = 1.;
    }
}
Here's what it looks like with a linear orange gradient:
It does not use anything like an arbitrary gamma, which is good; the only mostly arbitrary thing is the luminosity weights, but I guess those are quite necessary.
You have to map it to some non-linear scale. For example: http://en.wikipedia.org/wiki/Gamma_correction .
Ex: Let y = f(x) = log(1+x) - log(1-x) define the "actual" luminance.
The reverse function is x = g(y) = (e^y-1)/(e^y+1).
Now, you have values x=1 and x=0.2. For the first case the corresponding y is infinity. Six times infinity is still infinity. If you use function g, you get the new x_new = 1.
For x=0.2, y = 0.4054651. After multiplying by 6, y_new = 2.432791 . The corresponding x_new = 0.8385876.
For x=0, x_new will still be 0 (I will leave the calculations to you).
So starting from (1.0, 0.2, 0.0) your new set of points are (1.0, 0.8385876, 0.0).
This is one example of mapping function. There are infinite number of them. Choose one that looks best to you.
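A small sketch of this particular mapping in Python (the names f, g and the factor of 6 follow the answer; as noted, the choice of mapping itself is arbitrary):

import math

def f(x):
    # "actual" luminance: maps [0, 1) to [0, infinity)
    return math.log(1 + x) - math.log(1 - x)

def g(y):
    # inverse mapping back into [0, 1)
    return (math.exp(y) - 1) / (math.exp(y) + 1)

def expose(rgb, factor):
    # multiply in the transformed space, then map back;
    # channels at exactly 1.0 stay at 1.0 (f(1) is infinite), 0.0 stays 0.0
    return tuple(1.0 if c >= 1.0 else g(f(c) * factor) for c in rgb)

print(expose((1.0, 0.2, 0.0), 6))   # -> (1.0, 0.8385876..., 0.0)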

Image colorization algorithm

I have an image whose pixel colors I want to change to match a particular color (though not completely).
As an example, I want to tint the image of a red car so that it appears blue. I can do this with the GIMP and with ImageMagick, but I would like to know which algorithm they are using to do this so I can implement it in my own program.
I have tried to do this with simple addition of the difference between the colors but it doesn't work very well.
As just a shot in the dark, untested suggestion from someone who's getting into image processing fairly recently... maybe you could just scale the channels?
For example:
RGB_Pixel.r = RGB_Pixel.r * 0.75;
RGB_Pixel.g = RGB_Pixel.g * 0.75;
RGB_Pixel.b = RGB_Pixel.b * 1.25;
If you loop through your image pixel-by-pixel with those three changes, I'd expect you to see the image shift towards blue, and the numbers of course can be trial-and-error'd.
EDIT:
Now if you want to ONLY change the color of pixels that are a certain color to begin with, say, you want to turn a blue car red without doing anything to the rest of the picture, you'll need to run a check on each pixel to see what color it looks like. One way to do this is to use a Euclidean distance:
int R = RGB_Pixel.r;
int G = RGB_Pixel.g;
int B = RGB_Pixel.b;

// You are looking for Blue, which is [0 0 255];
// this variable D is the distance of your current pixel from the desired color.
float D = sqrt( (R-0)*(R-0) + (G-0)*(G-0) + (B-255)*(B-255) );

if (D < threshold)
{
    R = R * 0.75;
    G = G * 0.75;
    B = B * 1.25;
}
The threshold variable is a number between 1 and 255 that represents the maximum distance a color can be from the color you're looking for and still be considered "close enough". This is because you don't want to only look for [0 0 255], very rarely will you find perfect blue (or perfect anything) in an image.
You want to use the lowest threshold you can get away with so that you don't end up coloring other things that aren't part of the object you're looking for, but you want to make sure your threshold is high enough that it covers your entire image. One way to do this is to set up multiple D variables, each with a different target color, so you can capture a few separate types of "blue" without using a really high threshold. For instance, to the human eye, [102 102 200] looks like blue, but might require a pretty high threshold to catch if [0 0 255] is your target color.
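As a sketch of that multiple-target idea, here is a small Python version; the target list and the threshold value are made-up illustrative numbers, not recommendations:

import math

# accept a pixel if it is within the threshold of any "blue-ish" target,
# so one tight threshold can still cover shades like (102, 102, 200)
TARGETS = [(0, 0, 255), (102, 102, 200)]

def is_close(pixel, targets=TARGETS, threshold=80):
    return any(math.dist(pixel, t) < threshold for t in targets)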
I suggest playing with this calculator to get a feel for which colors you want to search for specifically.

Detection of coins (and fit ellipses) on an image

I am currently working on a project where I am trying to detect a few coins lying on a flat surface (i.e. a desk). The coins do not overlap and are not hidden by other objects. But there might be other objects visible and the lighting conditions may not be perfect... Basically, consider yourself filming your desk which has some coins on it.
So each coin should be visible as an ellipse. Since I don't know the position of the camera, the shape of the ellipses may vary, from a circle (view from top) to flat ellipses depending on the angle the coins are filmed from.
My problem is that I am not sure how to extract the coins and finally fit ellipses over them (which I am looking for to do further calculations).
For now, I have just made the first attempt by setting a threshold value in OpenCV, using findContours() to get the contour lines and fitting an ellipse. Unfortunately, the contour lines only rarely give me the shape of the coins (reflections, bad lighting, ...) and this way is also not preferred since I don't want the user to set any threshold.
Another idea was to use a template matching method of an ellipse on that image, but since I don't know the angle of the camera nor the size of the ellipses I don't think this would work well...
So I wanted to ask if anybody could tell me a method that would work in my case.
Is there a fast way to extract the three coins from the image? The calculations should be made in realtime on mobile devices and the method should not be too sensitive for different or changing lights or the color of the background.
Would be great if anybody could give me any tips on which method could work for me.
Here's some C99 source implementing the traditional approach (based on OpenCV doco):
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

//
// We need this to be high enough to get rid of things that are too small to
// have a definite shape. Otherwise, they will end up as ellipse false positives.
//
#define MIN_AREA 100.00

//
// One way to tell if an object is an ellipse is to look at the relationship
// of its area to its dimensions. If its actual occupied area can be estimated
// using the well-known area formula Area = PI*A*B, then it has a good chance of
// being an ellipse.
//
// This value is the maximum permissible error between actual and estimated area.
//
#define MAX_TOL 100.00

int main( int argc, char** argv )
{
    IplImage* src;
    // the first command line parameter must be the file name of a binary (black-and-white) image
    if( argc == 2 && (src = cvLoadImage(argv[1], 0)) != 0 )
    {
        IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3 );
        CvMemStorage* storage = cvCreateMemStorage(0);
        CvSeq* contour = 0;

        cvThreshold( src, src, 1, 255, CV_THRESH_BINARY );
        //
        // Invert the image such that white is foreground, black is background.
        // Dilate to get rid of noise.
        //
        cvXorS(src, cvScalar(255, 0, 0, 0), src, NULL);
        cvDilate(src, src, NULL, 2);

        cvFindContours( src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
        cvZero( dst );

        for( ; contour != 0; contour = contour->h_next )
        {
            double actual_area = fabs(cvContourArea(contour, CV_WHOLE_SEQ, 0));
            if (actual_area < MIN_AREA)
                continue;

            //
            // FIXME:
            // Assuming the axes of the ellipse are horizontal/vertical.
            //
            CvRect rect = ((CvContour *)contour)->rect;
            int A = rect.width / 2;
            int B = rect.height / 2;
            double estimated_area = M_PI * A * B;
            double error = fabs(actual_area - estimated_area);
            if (error > MAX_TOL)
                continue;

            printf
            (
                "center x: %d y: %d A: %d B: %d\n",
                rect.x + A,
                rect.y + B,
                A,
                B
            );

            CvScalar color = CV_RGB( rand() % 255, rand() % 255, rand() % 255 );
            cvDrawContours( dst, contour, color, color, -1, CV_FILLED, 8, cvPoint(0,0));
        }

        cvSaveImage("coins.png", dst, 0);
    }
    return 0;
}
Given the binary image that Carnieri provided, this is the output:
./opencv-contour.out coin-ohtsu.pbm
center x: 291 y: 328 A: 54 B: 42
center x: 286 y: 225 A: 46 B: 32
center x: 471 y: 221 A: 48 B: 33
center x: 140 y: 210 A: 42 B: 28
center x: 419 y: 116 A: 32 B: 19
And this is the output image:
What you could improve on:
Handle different ellipse orientations (currently, I assume the axes are horizontal/vertical). This would not be hard to do using image moments.
Check for object convexity (have a look at cvConvexityDefects)
Your best way of distinguishing coins from other objects is probably going to be by shape. I can't think of any other low-level image features (color is obviously out). So, I can think of two approaches:
Traditional object detection
Your first task is to separate the objects (coins and non-coins) from the background. Otsu's method, as suggested by Carnieri, will work well here. You seem to worry about the images being bipartite, but I don't think this will be a problem. As long as there is a significant amount of desk visible, you're guaranteed to have one peak in your histogram, and as long as there are a couple of visually distinguishable objects on the desk, you are guaranteed your second peak.
Dilate your binary image a couple of times to get rid of noise left by thresholding. The coins are relatively big so they should survive this morphological operation.
Group the white pixels into objects using region growing -- just iteratively connect adjacent foreground pixels. At the end of this operation you will have a list of disjoint objects, and you will know which pixels each object occupies.
From this information, you will know the width and the height of the object (from the previous step). So, now you can estimate the size of the ellipse that would surround the object, and then see how well this particular object matches the ellipse. It may be easier just to use width vs height ratio.
Alternatively, you can then use moments to determine the shape of the object in a more precise way.
I don't know what the best method for your problem is. About thresholding specifically, however, you can use Otsu's method, which automatically finds the optimal threshold value based on an analysis of the image histogram. Use OpenCV's threshold method with the parameter ThresholdType equal to THRESH_OTSU.
Be aware, though, that Otsu's method works well only on images with bimodal histograms (for instance, images with bright objects on a dark background).
You've probably seen this, but there is also a method for fitting an ellipse around a set of 2D points (for instance, a connected component).
EDIT: Otsu's method applied to a sample image:
Grayscale image:
Result of applying Otsu's method:
If anyone else comes along with this problem in the future as I did, but using C++:
Once you have used findContours to find the contours (as in Misha's answer above), you can easily fit ellipses using fitEllipse, eg
vector<vector<Point> > contours;
findContours(img, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

vector<RotatedRect> rotRecs(contours.size());
for (size_t i = 0; i < contours.size(); i++) {
    rotRecs[i] = fitEllipse(contours[i]);
}

Constructing colours for maximum contrast

I want to draw some items on screen, each item is in one of N sets. The number of sets changes all the time, so I need to calculate N different colours which are as different as possible (to make it easy to identify what is in which set).
So, for example with N = 2 my results would be black and white. With three I guess I would get all red, all green, all blue. For all four, it's less obvious what the correct answer is, and this is where I'm having trouble.
EDIT: The obvious approach is to map 0 to red, 1 to green, and all the colours in between to the appropriate rainbow colours; then you can get a colour for set N by doing GetRainbowColour(N / TotalSets), so a GetRainbowColour method is all that's needed to solve this problem.
You can read up on the HSL and HSV color models in this wikipedia article. The "H" in the acronymns stands for Hue, and that is the rainbow you want. It sounds like you want saturation to max out. The article also explains how to convert to RGB color.
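As a minimal sketch of the GetRainbowColour idea using the HSV model mentioned above (this uses Python's standard colorsys module; the 0-255 output scaling and the example total_sets value are assumptions):

import colorsys

def get_rainbow_colour(t):
    # t in [0, 1] -> fully saturated, full-value colour with hue t
    r, g, b = colorsys.hsv_to_rgb(t, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

# one colour per set, per the question's GetRainbowColour(N / TotalSets) idea
total_sets = 4
colours = [get_rainbow_colour(n / total_sets) for n in range(total_sets)]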
Looks like a similar question has been asked before here.
The answer to this question is subjective - what is best contrast to someone with full vision is not necessarily best contrast to someone who is colour blind or has limited vision or someone with normal eyesight who is operating the equipment in a dark environment.
Physiologically, humans have much better resolution for intensity than for hue or saturation. That is why analogue TV, digital video and photo compression throw away colour information to reduce bandwidth (4:2:2): if you put two pixels of different intensities together, it doesn't matter what colour they are; we can only sense colour differences over large areas of like intensity.
So if the thing you are trying to display has lots of small areas of only a few pixels, or you want it to be used in the dark (in the dark the brain boosts the blue cells and we don't see colour as well) or by the 10% of the male population who are colour blind, consider using intensity as the main differentiating factor rather than hue. GetRainbowColour would ignore the most important dimension of the human visual sense, but can work quite well for continuous data.
Thanks, brainjam, for the suggestion to use HSL. I whipped up this little function that seems to work nicely for stacked graphs:
var contrastingColors = function(len) {
  var result = [];
  if (len > 0) {
    var h = [0, 180]; // red, cyan
    var shft = 360 / len;
    var flip = 0;
    var l = 50;
    for (var ix = 0; ix < len; ix++) {
      result.push("hsl(" + h[flip] + ",100%," + l + "%)");
      h[flip] = (h[flip] + shft) % 360;
      flip = flip ? 0 : 1;
      if (flip == 0) {
        l = (l == 50) ? 30 : (l == 30 ? 70 : 50);
      }
    }
  }
  return result;
};
