HDR color-space transformations result in negative RGB values (Yxy to XYZ to sRGB) - matrix

I'm currently adding HDR to an old engine and stumbled on a color-space transformation problem. The pipeline is:
- I define my lights in the Yxy color space,
- convert Yxy to XYZ,
- convert XYZ to sRGB,
- render with RGB values > 1.0 and normalize the result with tone mapping at the end.
I'm working with rather huge numbers, since the main light source is the sun with up to 150k lux of illuminance.
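For reference, a minimal sketch of the kind of tone-mapping normalization I mean at the end (a simple Reinhard operator here, not my engine's actual one):
import numpy as np

def reinhard(rgb_linear):
    # Map HDR linear RGB (values possibly far above 1.0) into [0, 1) for display.
    rgb_linear = np.asarray(rgb_linear, dtype=float)
    return rgb_linear / (1.0 + rgb_linear)

print(reinhard([2242.7, 3213.56, 4501.09]))  # one of the "newrgb" test samples below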
YxyToXYZ function
osg::Vec3 YxyToXYZ( const osg::Vec3& Yxy )
{
    if ( Yxy[2] > 0.0f )
    {
        return osg::Vec3( Yxy[0] * Yxy[1] / Yxy[2], Yxy[0], Yxy[0] * ( 1.0f - Yxy[1] - Yxy[2] ) / Yxy[2] );
    }
    else
    {
        return osg::Vec3( 0.0f, 0.0f, 0.0f );
    }
}
XYZtosRGB
osg::Vec3 XYZToSpectralRGB( const osg::Vec3& XYZ )
{
    // Wikipedia matrix
    osg::Vec3 rgb;
    rgb[0] =  3.240479 * XYZ[0] - 1.537150 * XYZ[1] - 0.498535 * XYZ[2];
    rgb[1] = -0.969256 * XYZ[0] + 1.875992 * XYZ[1] + 0.041556 * XYZ[2];
    rgb[2] =  0.055648 * XYZ[0] - 0.204043 * XYZ[1] + 1.057311 * XYZ[2];
    std::cout << "newrgb rgb r:" << rgb[0] << " g:" << rgb[1] << " b:" << rgb[2] << std::endl;

    // The matrix in the pbrt book p. 235 gives unexpected results. We expect that if we have
    // x = y = 0.33333 we get a white pixel, but that matrix gives red. Hence we use a different
    // matrix that is often used by 3D people.
    rgb[0] =  2.5651 * XYZ[0] - 1.1665 * XYZ[1] - 0.3986 * XYZ[2];
    rgb[1] = -1.0217 * XYZ[0] + 1.9777 * XYZ[1] + 0.0439 * XYZ[2];
    rgb[2] =  0.0753 * XYZ[0] - 0.2543 * XYZ[1] + 1.1892 * XYZ[2];
    std::cout << "oldrgb rgb r:" << rgb[0] << " g:" << rgb[1] << " b:" << rgb[2] << std::endl;
    return rgb;
}
Test samples:
Yxy Y:1 x:1 y:1
XYZ X:1 Y:1 Z:-1
newrgb rgb r:2.20186 g:0.86518 b:-1.20571
oldrgb rgb r:1.7972 g:0.9121 b:-1.3682
Yxy Y:25 x:0.26 y:0.28
XYZ X:23.2143 Y:25 Z:41.0714
newrgb rgb r:16.3211 g:26.106 b:39.616
oldrgb rgb r:14.0134 g:27.5275 b:44.2327
Yxy Y:3100 x:0.27 y:0.29
XYZ X:2886.21 Y:3100 Z:4703.45
newrgb rgb r:2242.7 g:3213.56 b:4501.09
oldrgb rgb r:1912.47 g:3388.51 b:5022.34
Yxy Y:6e+06 x:0.33 y:0.33
XYZ X:6e+06 Y:6e+06 Z:6.18182e+06
newrgb rgb r:7.13812e+06 g:5.69731e+06 b:5.64573e+06
oldrgb rgb r:5.92753e+06 g:6.00738e+06 b:6.27742e+06
Question:
I suppose the negative values should just be clamped away? Or do I have an error in my calculations?
The two matrices produce similar but different values (their primaries are very close to sRGB). I would like to replace the old matrix (which I have no idea where it comes from) with the Wikipedia one. Does anyone know where the old matrix comes from, and which one is correct?
I found a partial answer ("Yxy to RGB conversion") which sounds like it's from a former colleague :)... but it doesn't solve my problem.
Many Thanks

Your newrgb computations are correct; I get the same output using Colour:
import colour
xyY = (1.0, 1.0, 1.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.26, 0.28, 25.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.27, 0.29, 3100.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.33, 0.33, 6e+06)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
# [ 2.2020461 0.86530782 -1.20530687]
# [ 16.31921754 26.10605109 39.60507773]
# [ 2242.49706509 3213.58480355 4499.85141917]
# [ 7138073.69325106 5697605.86069197 5644291.15301836]
You get negative values in the first conversion because your xy chromaticity coordinates are outside the spectral locus, thus they represent imaginary colours.
The Wikipedia matrix is correct and is the one from IEC 61966-2-1:1999 which is the official standard for sRGB colourspace.
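Regarding clamping: simply clipping the negative components to zero after the matrix multiplication is the common quick fix (this is just the naive approach, not proper gamut mapping):
import numpy as np

# IEC 61966-2-1 XYZ -> linear sRGB matrix (the "Wikipedia" matrix)
M = np.array([[ 3.240479, -1.537150, -0.498535],
              [-0.969256,  1.875992,  0.041556],
              [ 0.055648, -0.204043,  1.057311]])

XYZ = np.array([1.0, 1.0, -1.0])  # from Yxy Y=1, x=1, y=1 (an imaginary colour)
rgb = M @ XYZ
print(rgb)                      # roughly [ 2.2019  0.8652 -1.2057], matching the first test sample
print(np.clip(rgb, 0.0, None))  # negative (out-of-gamut) component clamped to zero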

Related

RAW 12 bits per pixel data format

I was analyzing a 12-bit-per-pixel, GRBG, little-endian, 1920x1280 raw image, but I am confused about how the data or RGB pixels are stored. The image size is 4915200 bytes; when calculated, 4915200 / (1920x1280) = 2. That means each pixel takes 2 bytes and 4 bits of the 2 bytes are used for padding. I tried to edit the image with a hex editor, but I have no idea how the pixels are stored in the image. Please share if you have any idea.
Image Link
That means each pixel takes 2 bytes and 4 bits of the 2 bytes are used for padding
Well, sort of. It means each sample is stored in two consecutive bytes, with 4 bits of padding. But in raw images, samples usually aren't pixels, not exactly. Raw images have not been demosaiced yet; they are raw, after all. For GRBG, the 2x2 tile of the Bayer pattern looks like this:
G R
B G
What's in the file is a 1920x1280 grid of 12+4-bit samples, arranged in the same order as pixels would have been, but each sample has only one channel, namely the one that corresponds to its position in the Bayer pattern.
Additionally, the color space is probably linear, not gamma-compressed. The color balance is unknown unless you reverse-engineer it. A proper decoder would have a calibrated color matrix, but I don't have that.
I combined these two things and guessed a color balance to do a really basic decoding (with bad demosaicing, just to demonstrate that the above information is probably accurate):
Using this C# code:
Bitmap bm = new Bitmap(1920, 1280);
for (int y = 0; y < 1280; y += 2)
{
    int i = y * 1920 * 2;
    for (int x = 0; x < 1920; x += 2)
    {
        const int stride = 1920 * 2; // bytes per row (two bytes per sample)
        // Read one 2x2 GRBG tile, little-endian: d0 = G, d1 = R, d2 = B, d3 = G
        int d0 = data[i] + (data[i + 1] << 8);
        int d1 = data[i + 2] + (data[i + 3] << 8);
        int d2 = data[i + stride] + (data[i + stride + 1] << 8);
        int d3 = data[i + stride + 2] + (data[i + stride + 3] << 8);
        i += 4;
        // Guessed colour balance; sqrt is a crude gamma so the linear data is viewable
        int r  = Math.Min((int)(Math.Sqrt(d1) * 4.5), 255);
        int b  = Math.Min((int)(Math.Sqrt(d2) * 9), 255);
        int g0 = Math.Min((int)(Math.Sqrt(d0) * 5), 255);
        int g3 = Math.Min((int)(Math.Sqrt(d3) * 5), 255);
        int g1 = Math.Min((int)(Math.Sqrt((d0 + d3) * 0.5) * 5), 255);
        // Very basic demosaic: reuse r/b across the tile, blend the two greens
        bm.SetPixel(x, y, Color.FromArgb(r, g0, b));
        bm.SetPixel(x + 1, y, Color.FromArgb(r, g1, b));
        bm.SetPixel(x, y + 1, Color.FromArgb(r, g1, b));
        bm.SetPixel(x + 1, y + 1, Color.FromArgb(r, g3, b));
    }
}
You can load your image into a Numpy array and reshape correctly like this:
import numpy as np
# Load image and reshape
img = np.fromfile('Image_12bpp_grbg_LittleEndian_1920x1280.raw',dtype=np.uint16).reshape((1280,1920))
print(img.shape)
(1280, 1920)
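Continuing from the img array above, a quick sanity check of the 12-bits-in-16 layout (this assumes the 4 padding bits are the high bits of each little-endian word):
print(img.max())          # should not exceed 4095 if the data really is 12-bit
print((img >> 12).any())  # False if the top 4 padding bits are always zero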
Then you can demosaic and scale to get a 16-bit PNG. Note that I don't know your calibration coefficients so I guessed:
#!/usr/bin/env python3
# Demosaicing Bayer Raw image
# https://stackoverflow.com/a/68823014/2836621
import cv2
import numpy as np
filename = 'Image_12bpp_grbg_LittleEndian_1920x1280.raw'
# Set width and height
w, h = 1920, 1280
# Read mosaiced image as GRGRGR...
# BGBGBG...
bayer = np.fromfile(filename, dtype=np.uint16).reshape((h,w))
# Extract g0, g1, b, r from mosaic
g0 = bayer[0::2, 0::2] # every second pixel down and across starting at 0,0
g1 = bayer[1::2, 1::2] # every second pixel down and across starting at 1,1
r = bayer[0::2, 1::2] # every second pixel down and across starting at 0,1
b = bayer[1::2, 0::2] # every second pixel down and across starting at 1,0
# Apply (guessed) color matrix for 16-bit PNG
R = np.sqrt(r) * 1200
B = np.sqrt(b) * 2300
G = np.sqrt((g0+g1)/2) * 1300 # very crude
# Stack into 3 channel
BGR16 = np.dstack((B,G,R)).astype(np.uint16)
# Save result as 16-bit PNG
cv2.imwrite('result.png', BGR16)
Keywords: Python, raw, image processing, Bayer, de-Bayer, mosaic, demosaic, de-mosaic, GBRG, 12-bit.

GLSL uv lookup and precision with FBO / RenderTarget in Three.js

My application is coded in Javascript + Three.js / WebGL + GLSL. I have 200 curves, each one made of 85 points. To animate the curves I add a new point and remove the last.
So I made a positions shader that stores the new positions onto a texture (1) and the lines shader that writes the positions for all curves on another texture (2).
The goal is to use textures as arrays: I know the first and last index of a line, so I need to convert those indices to uv coordinates.
I use FBOHelper to debug FBOs.
1) This 1D texture contains the new points for each curve (200 in total): positionTexture
2) And these are the 200 curves, with all their points, one after the other: linesTexture
The black parts are the BUG here. Those texels shouldn't be black.
How does it work: at each frame the shader looks up the new point for each line in the positionTexture and updates the linesTextures accordingly, with a for loop like this:
#define LINES_COUNT 200
#define LINE_POINTS 85 // with 100 it works!!!

// Then in main()
vec2 uv = gl_FragCoord.xy / resolution.xy;
for (float i = 0.0; i < LINES_COUNT; i += 1.0) {
    float startIdx = i * LINE_POINTS;            // line start index
    float endIdx = startIdx + LINE_POINTS - 1.0; // line end index
    vec2 lastCell = getUVfromIndex(endIdx);      // last uv coordinate reserved for current line
    if (match(lastCell, uv)) {
        pos = texture2D( positionTexture, vec2((i / LINES_COUNT) + minFloat, 0.0)).xyz;
    } else if (index >= startIdx && index < endIdx) {
        pos = texture2D( lineTexture, getNextUV(uv) ).xyz;
    }
}
This works, but it's slightly buggy when I have many lines (150+): likely a precision problem. I'm not sure if the functions I wrote to look up the textures are right. I wrote functions like getNextUV(uv) to get the value from the next index (converted to uv coordinates) and copy it to the previous one, and match(xy, uv) to know whether the current fragment is the texel I want.
I thought I could simply use the classic formula:
index = uv.y * width + uv.x
But it's more complicated than that. For example match():
// Whether a point XY is within a UV coordinate
float size = 132.0; // width and height of texture
float unit = 1.0 / size;
float minFloat = unit / size;

bool match(vec2 point, vec2 uv) {
    vec2 p = point;
    float x = floor(p.x / unit) * unit;
    float y = floor(p.y / unit) * unit;
    return x <= uv.x && x + unit > uv.x && y <= uv.y && y + unit > uv.y;
}
Or getUVfromIndex():
vec2 getUVfromIndex(float index) {
    float row = floor(index / size);  // Example: 83.56 / 10 = 8
    float col = index - (row * size); // Example: 83.56 - (8 * 10) = 3.56
    col = col / size + minFloat;      // u = 0.357
    row = row / size + minFloat;      // v = 0.81
    return vec2(col, row);
}
Can someone explain what's the most efficient way to lookup values in a texture, by getting a uv coordinate from index value?
Texture coordinates go from the edge of pixels, not the centers, so your formula to compute UV coordinates needs to be
u = (xPixelCoord + .5) / widthOfTextureInPixels;
v = (yPixelCoord + .5) / heightOfTextureInPixels;
So I'm guessing you want getUVfromIndex to be
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(index, widthOfTexture);
    float row = floor(index / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
Or, based on some other experience with math issues in shaders, you might need to fudge the index:
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float fudgedIndex = index + 0.1;
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(fudgedIndex, widthOfTexture);
    float row = floor(fudgedIndex / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
If you're in WebGL2 you can use texelFetch, which takes integer pixel coordinates, to get a value from a texture.
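If it helps, here is the same index-to-UV math checked outside the shader in plain Python (132 is the texture size from the question; the function names are just for illustration):
size = 132.0  # texture width/height from the question

def uv_from_index_original(index):
    # The question's approach: it lands essentially on a texel edge, so
    # floating-point error can push the lookup into the neighbouring texel.
    row = index // size
    col = index - row * size
    min_float = 1.0 / (size * size)
    return (col / size + min_float, row / size + min_float)

def uv_from_index_centered(index):
    # The pixel-centre convention: (coord + 0.5) / size.
    col = index % size
    row = index // size
    return ((col + 0.5) / size, (row + 0.5) / size)

idx = 150.0 * 85.0  # first texel of line 150, around where the bugs start to appear
print(uv_from_index_original(idx))  # ~(0.59097, 0.72733): barely past the texel edge at 78/132
print(uv_from_index_centered(idx))  # ~(0.59470, 0.73106): safely at the texel centre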

Depth / Ranged Image to Point Cloud

I am trying to create a pointcloud from a depth image / ranged image / displacement map (grey values which correspond to z depth).
Here is the depth image of a trailer vehicle:
http://de.tinypic.com/r/rlvp80/8
Camera Parameters, no rotation: x,y,z: 0,25,0
I get this pointcloud with a sloped ground plane, which should be flat (just a quad at 0,0,0 with no rotation):
rotated:
http://de.tinypic.com/r/2mnglj6/8
Here is my code. I try to do this:
normalizedCameraRay = normalize(CameraRay);
Point_in_3D = (zDepthValueOfPixelXY / normalizedCameraRay.zValue) * normalizedCameraRay; // zValue is <= 1
This doesn't seem to work. If I use zDepthValueOfPixelXY * normalizedCameraRay, I get radial distortion (everything looks a bit like a sphere).
Any Ideas how to fix this? I attached my code.
Thank you for your ideas
float startPositionX = -((float)image.rows) / 2;
float startPositionY = -((float)image.cols) / 2;
glm::vec3 cameraWorldPosition(x, y, z);
qDebug() << x << "," << y << ", " << z;

for (int a = 0; a < image.rows; a++)
{
    for (int b = 0; b < image.cols; b++)
    {
        float movementX = startPositionX + b;
        float movementY = startPositionY + a;
        glm::vec3 ray(movementX, movementY, focal_length);
        ray = glm::normalize(ray);
        float rayFactor = ((float)image.at<u_int16_t>(a, b)) / ray[2];
        glm::vec3 completeRay = rayFactor * ray;
        pointCloud[a][b] = completeRay;
        /*
        ray = completeRay - cameraWorldPosition;
        ray = glm::normalize(ray);
        rayFactor = ((float)image.at<u_int16_t>(a, b)) / ray[2];
        ray = rayFactor * ray;
        pointCloud[a][b] = ray;
        */
        /*
        ray = glm::vec3(startPositionX + b, startPositionY - a, image.at<u_int16_t>(a, b));
        pointCloud[a][b] = ray;
        */
    }
}
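For reference, this is the back-projection I am trying to do, as a minimal numpy sketch (it assumes the depth value is the distance along the optical axis z, that the principal point is at the image centre, and that focal_length is in pixels):
import numpy as np

def depth_to_pointcloud(depth, focal_length):
    # depth: (rows, cols) array of z-distances along the optical axis
    rows, cols = depth.shape
    # pixel coordinates relative to the image centre (principal point)
    u = np.arange(cols) - cols / 2.0
    v = np.arange(rows) - rows / 2.0
    uu, vv = np.meshgrid(u, v)
    # pinhole model: x / z = u / f and y / z = v / f, so each ray is scaled until its z equals the depth
    x = uu * depth / focal_length
    y = vv * depth / focal_length
    z = depth.astype(float)
    return np.dstack((x, y, z))  # (rows, cols, 3)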

Calculating the correct texture coordinates for a 3D terrain

Using a texture, I'm trying to pass data to my shader so it knows what color each fragment should be. I'm attempting to create a voxel-type terrain (Minecraft-style voxels) using 8-bit ints, with each RGBA value being a different color specified in the shader. The value 1 might be green and 2 might be brown, for example.
If my math is correct, a 2048 x 2048 sized texture is the exact size needed for the voxel terrain data:
2048 x 2048 sized texture = 4194304 pixels.
8 x 8 = 64 "chunks" loaded at once.
32 x 32 x 256 = 262144 voxels in a chunk.
64 x 262144 = 16777216 voxels.
For each pixel in the texture I can use RGBA as individual values, so divide it by 4: (Each voxel is therefore 1 byte which is fine as values will be less than 200.)
16777216 / 4 = 4194304 pixels.
That said, I'm having trouble getting the correct texture coordinates to represent the 3D terrain. This is my code at the moment which works fine for a flat plane:
Fragment shader:
int modint( int a, int b )
{
    return a - int( floor( float( a ) / float( b ) ) * float( b ) );
}

void main() {
    // divide by 4096 because we're using the same pixel twice in each axis
    vec4 data = texture2D(uSampler, vec2(verpos.x / 4096.0, verpos.z / 4096.0));

    vec2 pixel_of_target = verpos.xz;
    int _x = int( pixel_of_target.x );
    int _y = int( pixel_of_target.y );
    int X = modint( _y, 2 ) * 2 + modint( _x, 2 );

    vec4 colorthing;
    float blockID;

    if (X == 0) blockID = data.x;
    else if (X == 1) blockID = data.y;
    else if (X == 2) blockID = data.z;
    else if (X == 3) blockID = data.w;

    if (blockID == 1.0) gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 );
    else if (blockID == 2.0) gl_FragColor = vec4( 0.0, 1.0, 0.0, 1.0 );
    else if (blockID == 3.0) gl_FragColor = vec4( 0.0, 0.0, 1.0, 1.0 );
    else if (blockID == 4.0) gl_FragColor = vec4( 1.0, 0.0, 1.0, 1.0 );
}
So basically my texture is a 2D map containing slices of my 3D data, and I need to modify this code so it calculates the correct coordinates. Anyone know how I would do this?
Assuming that your texture will contain 64 slices of your terrain in a 8x8 grid, the following lookup should work:
vec2 texCoord = vec2((verpos.x / 8.0 + mod(verpos.y, 8.0)) / 4096.0,
                     (verpos.x / 8.0 + floor(verpos.y / 8.0)) / 4096.0);
vec4 data = texture2D(uSampler, texCoord);
... Rest of your shader as above
Your texture then should contain a full 2D slice of the terrain at height 0, then next to it a slice at height 1, until height 7 at the rightmost position. In the next row are heights 8 - 15 and so on.
Having said that, you should normally try to avoid ifs in shader code, because they slow down shader processing quite a bit. If WebGL supports arrays (which it does, to my knowledge), you can store all your colors in an array and do an array lookup instead of the if-chain.
the texture coordinate should be
vec4 data = texture2D(uSampler, vec2(verpos.x / (4 * 2048.0), verpos.z / 2048.0));
and then the byte to read would be given by
int index = (verpos.x/2048) % 4;
if index == 0 pick data.r, if 1 data.g, if 2 data.b and so on...

Smooth spectrum for Mandelbrot Set rendering

I'm currently writing a program to generate really enormous (65536x65536 pixels and above) Mandelbrot images, and I'd like to devise a spectrum and coloring scheme that does them justice. The Wikipedia featured Mandelbrot image seems like an excellent example, especially in how the palette remains varied at all zoom levels of the sequence. I'm not sure if it's rotating the palette or doing some other trick to achieve this, though.
I'm familiar with the smooth coloring algorithm for the Mandelbrot set, so I can avoid banding, but I still need a way to assign colors to output values from this algorithm.
The images I'm generating are pyramidal (e.g., a series of images, each of which has half the dimensions of the previous one), so I can use a rotating palette of some sort, as long as the change in the palette between subsequent zoom levels isn't too obvious.
This is the smooth color algorithm:
Let's say you start with the complex number z0 and iterate n times until it escapes. Let the end point be zn.
A smooth value would be
nsmooth := n + 1 - Math.log(Math.log(zn.abs()))/Math.log(2)
This only works for the Mandelbrot set; if you want to compute a smooth value for Julia sets, use
Complex z = new Complex(x, y);
double smoothcolor = Math.exp(-z.abs());

for (i = 0; i < max_iter && z.abs() < 30; i++) {
    z = f(z);
    smoothcolor += Math.exp(-z.abs());
}
Then smoothcolor is in the interval (0,max_iter).
Divide smoothcolor by max_iter to get a value between 0 and 1.
To get a smooth color from the value, you can call, for example (in Java):
Color.HSBtoRGB(0.95f + 10 * smoothcolor, 0.6f, 1.0f);
since the first value in HSB color parameters is used to define the color from the color circle.
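Putting the pieces together, here is a compact Python sketch of the Mandelbrot formula above (escape radius 2; the hue mapping mirrors the Java call, scaled by max_iter so it stays in range):
import math, colorsys

def smooth_mandelbrot_color(cx, cy, max_iter=256, escape_radius=2.0):
    zx, zy = 0.0, 0.0
    for n in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > escape_radius * escape_radius:
            # smooth (fractional) iteration count: n + 1 - log(log|zn|) / log 2
            zn = math.sqrt(zx * zx + zy * zy)
            nsmooth = n + 1 - math.log(math.log(zn)) / math.log(2.0)
            hue = (0.95 + 10.0 * nsmooth / max_iter) % 1.0
            return colorsys.hsv_to_rgb(hue, 0.6, 1.0)
    return (0.0, 0.0, 0.0)  # never escaped: inside the set, black

print(smooth_mandelbrot_color(0.5, 0.5))  # RGB in 0..1 for a point outside the set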
Use the smooth coloring algorithm to calculate all of the values within the viewport, then map your palette from the lowest to highest value. Thus, as you zoom in and the higher values are no longer visible, the palette will scale down as well. With the same constants for n and B you will end up with a range of 0.0 to 1.0 for a fully zoomed out set, but at deeper zooms the dynamic range will shrink, to say 0.0 to 0.1 at 200% zoom, 0.0 to 0.0001 at 20000% zoom, etc.
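In code, that per-viewport renormalisation is just a rescale of the visible smooth values before indexing the palette (a sketch; the helper name is illustrative):
import numpy as np

def map_to_palette(smooth_values, palette):
    # smooth_values: flat list/array of smooth iteration values for the current viewport.
    # Stretch whatever range is visible over the whole palette, so the palette keeps
    # its full variety even as the dynamic range shrinks while zooming in.
    v = np.asarray(smooth_values, dtype=float)
    t = (v - v.min()) / max(v.max() - v.min(), 1e-12)
    idx = (t * (len(palette) - 1)).astype(int)
    return [palette[i] for i in idx]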
Here is a typical inner loop for a naive Mandelbrot generator. To get a smooth colour you want to pass in the real and complex "lengths" and the iteration you bailed out at. I've included the Mandelbrot code so you can see which vars to use to calculate the colour.
for (ix = 0; ix < panelMain.Width; ix++)
{
    cx = cxMin + (double)ix * pixelWidth;
    // init this go
    zx = 0.0;
    zy = 0.0;
    zx2 = 0.0;
    zy2 = 0.0;
    for (i = 0; i < iterationMax && ((zx2 + zy2) < er2); i++)
    {
        zy = zx * zy * 2.0 + cy;
        zx = zx2 - zy2 + cx;
        zx2 = zx * zx;
        zy2 = zy * zy;
    }
    if (i == iterationMax)
    {
        // interior, part of set, black
        // set colour to black
        g.FillRectangle(sbBlack, ix, iy, 1, 1);
    }
    else
    {
        // outside, set colour proportional to time/distance it took to converge
        // set colour not black
        SolidBrush sbNeato = new SolidBrush(MapColor(i, zx2, zy2));
        g.FillRectangle(sbNeato, ix, iy, 1, 1);
    }
}
and MapColor below: (see this link to get the ColorFromHSV function)
private Color MapColor(int i, double r, double c)
{
    double di = (double)i;
    double zn;
    double hue;

    zn = Math.Sqrt(r + c);
    hue = di + 1.0 - Math.Log(Math.Log(Math.Abs(zn))) / Math.Log(2.0); // 2 is escape radius
    hue = 0.95 + 20.0 * hue; // adjust to make it prettier

    // the hsv function expects values from 0 to 360
    while (hue > 360.0)
        hue -= 360.0;
    while (hue < 0.0)
        hue += 360.0;

    return ColorFromHSV(hue, 0.8, 1.0);
}
MapColour is "smoothing" the bailout values from 0 to 1 which then can be used to map a colour without horrible banding. Playing with MapColour and/or the hsv function lets you alter what colours are used.
Seems simple to do by trial and error. Assume you can define HSV1 and HSV2 (hue, saturation, value) of the endpoint colors you wish to use (black and white; blue and yellow; dark red and light green; etc.), and assume you have an algorithm to assign a value P between 0.0 and 1.0 to each of your pixels. Then that pixel's color becomes
(H2 - H1) * P + H1 = HP
(S2 - S1) * P + S1 = SP
(V2 - V1) * P + V1 = VP
With that done, just observe the results and see how you like them. If the algorithm to assign P is continuous, then the gradient should be smooth as well.
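In code the per-channel interpolation is just a lerp between the two endpoint colours (a tiny sketch; hsv1 and hsv2 are (H, S, V) tuples):
def lerp_hsv(hsv1, hsv2, p):
    # p in [0.0, 1.0]: 0 gives hsv1, 1 gives hsv2
    return tuple(a + (b - a) * p for a, b in zip(hsv1, hsv2))

print(lerp_hsv((240.0, 1.0, 0.3), (60.0, 1.0, 1.0), 0.5))  # halfway between a dark blue and yellow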
My eventual solution was to create a nice looking (and fairly large) palette and store it as a constant array in the source, then interpolate between indexes in it using the smooth coloring algorithm. The palette wraps (and is designed to be continuous), but this doesn't appear to matter much.
What's going on with the color mapping in that image is that it's using a 'log transfer function' on the index (according to the documentation). How exactly it does that, I still haven't figured out. The program that produced it uses a palette of 400 colors, so the index ranges over [0, 399], wrapping around if needed. I've managed to get pretty close to matching its behavior. I use an index range of [0, 1) and map it like so:
double value = Math.log(0.021 * (iteration + delta + 60)) + 0.72;
value = value - Math.floor(value);
It's kind of odd that I have to use these special constants in there to get my results to match, since I doubt they do any of that. But whatever works in the end, right?
Here you can find a version in JavaScript.
Usage:
var rgbcol = [];
rgbcol = MapColor( Iteration, Zy2, Zx2 );
point( ctx, iX, iY, rgbcol[0], rgbcol[1], rgbcol[2] );
The functions:
/*
* The Mandelbrot Set, in HTML5 canvas and javascript.
* https://github.com/cslarsen/mandelbrot-js
*
* Copyright (C) 2012 Christian Stigen Larsen
*/
/*
* Convert hue-saturation-value/luminosity to RGB.
*
* Input ranges:
* H = [0, 360] (integer degrees)
* S = [0.0, 1.0] (float)
* V = [0.0, 1.0] (float)
*/
function hsv_to_rgb(h, s, v)
{
  if ( v > 1.0 ) v = 1.0;

  var hp = h / 60.0;
  var c = v * s;
  var x = c * (1 - Math.abs((hp % 2) - 1));
  var rgb = [0, 0, 0];

  if ( 0 <= hp && hp < 1 ) rgb = [c, x, 0];
  if ( 1 <= hp && hp < 2 ) rgb = [x, c, 0];
  if ( 2 <= hp && hp < 3 ) rgb = [0, c, x];
  if ( 3 <= hp && hp < 4 ) rgb = [0, x, c];
  if ( 4 <= hp && hp < 5 ) rgb = [x, 0, c];
  if ( 5 <= hp && hp < 6 ) rgb = [c, 0, x];

  var m = v - c;
  rgb[0] += m;
  rgb[1] += m;
  rgb[2] += m;

  rgb[0] *= 255;
  rgb[1] *= 255;
  rgb[2] *= 255;

  rgb[0] = parseInt( rgb[0] );
  rgb[1] = parseInt( rgb[1] );
  rgb[2] = parseInt( rgb[2] );
  return rgb;
}
// http://stackoverflow.com/questions/369438/smooth-spectrum-for-mandelbrot-set-rendering
// alex russel : http://stackoverflow.com/users/2146829/alex-russell
function MapColor(i, r, c)
{
  var di = i;
  var zn;
  var hue;

  zn = Math.sqrt(r + c);
  hue = di + 1.0 - Math.log(Math.log(Math.abs(zn))) / Math.log(2.0); // 2 is escape radius
  hue = 0.95 + 20.0 * hue; // adjust to make it prettier

  // the hsv function expects values from 0 to 360
  while (hue > 360.0)
    hue -= 360.0;
  while (hue < 0.0)
    hue += 360.0;

  return hsv_to_rgb(hue, 0.8, 1.0);
}
