Greenish image with BGRA to YUV444 conversion using a DirectX 11 pixel shader

I'm new to HLSL. I am trying to convert the color space of an image captured with the DXGI Desktop Duplication API from BGRA to YUV444, using a texture as the render target.
I have set up my pixel shader to perform the required transformation. Taking the 4:2:0 subsampled YUV from the render-target texture and encoding it as H.264 with ffmpeg, I can see the image.
The problem is that it is greenish.
The input color information for the shader is of float type, but the coefficient matrices available for RGB-to-YUV conversion assume integer color information.
If I use the clamp function and take integers out of the input color, I lose accuracy.
Any suggestions and directions are welcome. Please let me know if any other information helps.
I suspect the pixel shader I wrote, as I am working with HLSL for the first time. Here is the pixel shader:
float3 rgb_to_yuv(float3 RGB)
{
    float y = dot(RGB, float3(0.29900f, -0.16874f, 0.50000f));
    float u = dot(RGB, float3(0.58700f, -0.33126f, -0.41869f));
    float v = dot(RGB, float3(0.11400f, 0.50000f, -0.08131f));
    return float3(y, u, v);
}

float4 PS(PS_INPUT input) : SV_Target
{
    float4 rgba, yuva;
    rgba = tx.Sample(samLinear, input.Tex);
    float3 ctr = float3(0, 0, .5f);
    return float4(rgb_to_yuv(rgba.rgb) + ctr, rgba.a);
}
The render target is mapped to a CPU-readable texture, and the YUV444 data is copied into three BYTE arrays and supplied to the ffmpeg libx264 encoder.
The encoder writes the encoded packets to a video file.
Here I take, for each 2x2 block of pixels, one U (Cb), one V (Cr), and four Y values.
I retrieve the YUV420 data from the texture like this:
for (size_t h = 0, uvH = 0; h < desc.Height; ++h)
{
    for (size_t w = 0, uvW = 0; w < desc.Width; ++w)
    {
        dist = resource1.RowPitch * h + w * 4;
        distance = resource.RowPitch * h + w * 4;
        distance2 = inframe->linesize[0] * h + w;
        data = sptr[distance + 2];        // Y (the shader's R channel, byte 2 of a BGRA texel)
        pY[distance2] = data;
        if (w % 2 == 0 && h % 2 == 0)     // one chroma sample per 2x2 block
        {
            data1 = sptr[distance + 1];   // U / Cb (the shader's G channel)
            distance2 = inframe->linesize[1] * uvH + uvW++;
            pU[distance2] = data1;
            data1 = sptr[distance];       // V / Cr (the shader's B channel)
            pV[distance2] = data1;
        }
    }
    if (h % 2)
        uvH++;                            // advance the chroma row every other luma row
}
EDIT 1: Adding the blend state description:
D3D11_BLEND_DESC BlendStateDesc;
BlendStateDesc.AlphaToCoverageEnable = FALSE;
BlendStateDesc.IndependentBlendEnable = FALSE;
BlendStateDesc.RenderTarget[0].BlendEnable = TRUE;
BlendStateDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
BlendStateDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
BlendStateDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
BlendStateDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
BlendStateDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
BlendStateDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
hr = m_Device->CreateBlendState(&BlendStateDesc, &m_BlendState);
FLOAT blendFactor[4] = {0.f, 0.f, 0.f, 0.f};
m_DeviceContext->OMSetBlendState(nullptr, blendFactor, 0xffffffff); // note: nullptr binds the default blend state (blending disabled), not m_BlendState
m_DeviceContext->OMSetRenderTargets(1, &m_RTV, nullptr);
m_DeviceContext->VSSetShader(m_VertexShader, nullptr, 0);
m_DeviceContext->PSSetShader(m_PixelShader, nullptr, 0);
m_DeviceContext->PSSetShaderResources(0, 1, &ShaderResource);
m_DeviceContext->PSSetSamplers(0, 1, &m_SamplerLinear);
m_DeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
EDIT 2: The Y U V values calculated on the CPU are 45 200 170; the values after the pixel shader, which involves floating-point calculations, are 86 141 104.
The corresponding R G B values are 48 45 45. What could be making the difference?

It looks like your matrix is transposed.
According to http://www.martinreddy.net/gfx/faqs/colorconv.faq, under [6.4] ITU.BT-601 Y'CbCr:
Y'= 0.299*R' + 0.587*G' + 0.114*B'
Cb=-0.169*R' - 0.331*G' + 0.500*B'
Cr= 0.500*R' - 0.419*G' - 0.081*B'
You misinterpreted the behavior of numpy.dot in the source you copied.
Also, it looks like @harold is correct: you should be offsetting both U and V.
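For reference, a minimal C++ sketch of the corrected (non-transposed) conversion, with each output channel taking one row of the matrix and the 0.5 offset applied to both chroma channels. The struct and function names here are mine, and the inputs are assumed normalized to [0, 1]:

#include <cstdio>

struct Float3 { float x, y, z; };

// BT.601 Y'CbCr from normalized R'G'B', per the FAQ rows above.
// Note each output channel uses a *row* of the matrix, not a column.
Float3 rgb_to_ycbcr(Float3 rgb)
{
    Float3 yuv;
    yuv.x =  0.299f * rgb.x + 0.587f * rgb.y + 0.114f * rgb.z;         // Y'
    yuv.y = -0.169f * rgb.x - 0.331f * rgb.y + 0.500f * rgb.z + 0.5f;  // Cb, centered on 0.5
    yuv.z =  0.500f * rgb.x - 0.419f * rgb.y - 0.081f * rgb.z + 0.5f;  // Cr, centered on 0.5
    return yuv;
}

int main()
{
    // A near-gray pixel should land close to Cb = Cr = 128 after scaling.
    Float3 yuv = rgb_to_ycbcr({ 48 / 255.0f, 45 / 255.0f, 45 / 255.0f });
    std::printf("Y=%.0f Cb=%.0f Cr=%.0f\n", yuv.x * 255.0f, yuv.y * 255.0f, yuv.z * 255.0f);
    return 0;
}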

Based on this Wikipedia article, to convert RGB -> YUV444 (BT.601) you should use this function:
float3 RGBtoYUV(float3 c)
{
    float3 yuv;
    yuv.x = dot(c, float3(0.299, 0.587, 0.114));
    yuv.y = dot(c, float3(-0.14713, -0.28886, 0.436));
    yuv.z = dot(c, float3(0.615, -0.51499, -0.10001));
    return yuv;
}
Also, what's the format of the texture that you load into your shader?
Considering that you are using float4 rgba, yuva;, did you convert BGRA -> RGBA first?
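As a hedged side note: the U and V produced by this function are signed (roughly ±0.436 and ±0.615), so before writing them to a UNORM render target or an 8-bit plane they need an offset, and strictly a scale, to land in [0, 1]; otherwise negative chroma clamps to zero, which shows up as a color cast. A small sketch of that quantization step on the CPU side (the helper name is mine):

#include <algorithm>  // std::clamp (C++17)
#include <cstdint>

// Map a signed chroma sample into an unsigned 8-bit value centered on 128.
// halfRange is 0.436 for U and 0.615 for V with the coefficients above;
// the + 0.5f recentering is the same idea as the shader's "ctr" offset,
// applied to both channels rather than only one.
uint8_t chroma_to_byte(float c, float halfRange)
{
    float v = c / (2.0f * halfRange) + 0.5f;  // [-halfRange, halfRange] -> [0, 1]
    return (uint8_t)std::clamp(v * 255.0f + 0.5f, 0.0f, 255.0f);
}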

Related

FFmpeg color correction algorithm

I'm trying to sync CSS and FFmpeg color correction. The goal is to create a tool that converts CSS brightness-saturation-contrast-gamma filter values to the corresponding ffmpeg values and vice versa.
e.g.
-vf "eq=brightness=0.3:saturation=1.3:contrast=1.1"
→
filter="brightness(30%) saturate(130%) contrast(110%)"
While the algorithms for the CSS properties are available at the W3C, I have failed to find the ones for FFmpeg. I've tried to dig through GitHub. Starting from here I've unfolded function calls, but it is "a bit" too hard to navigate a project that is 20 years and 104k commits old. :)
I'll be very grateful if anyone can help me figure out the precise formulas for brightness, saturation, contrast, and gamma. Any hints. Thx.
This is the core function:
static void create_lut(EQParameters *param)
{
    int i;
    double g = 1.0 / param->gamma;
    double lw = 1.0 - param->gamma_weight;

    for (i = 0; i < 256; i++) {
        double v = i / 255.0;
        v = param->contrast * (v - 0.5) + 0.5 + param->brightness;
        if (v <= 0.0) {
            param->lut[i] = 0;
        } else {
            v = v * lw + pow(v, g) * param->gamma_weight;
            if (v >= 1.0)
                param->lut[i] = 255;
            else
                param->lut[i] = 256.0 * v;
        }
    }
    param->lut_clean = 1;
}
The filter operates only on 8-bit YUV inputs. This function creates a look-up table mapping each 8-bit input value 0-255 to an output value; the table is then applied to the input pixels.
The functions with names of the form set_parameter, like set_gamma, convert the user-supplied argument into the final value used in the function above. contrast is applied only to the luma plane; saturation only to the chroma planes.
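So for an 8-bit sample the whole eq pipeline collapses to one closed-form expression. Here is a hedged C++ restatement of create_lut for a single value (the function name is mine), which should make it easier to compare against the CSS formulas:

#include <cmath>
#include <cstdint>

// One entry of ffmpeg's eq LUT: contrast and brightness around mid-gray,
// then a gamma term blended in by gamma_weight.
uint8_t eq_map(uint8_t in, double contrast, double brightness,
               double gamma, double gamma_weight)
{
    double v = in / 255.0;
    v = contrast * (v - 0.5) + 0.5 + brightness;
    if (v <= 0.0)
        return 0;
    v = v * (1.0 - gamma_weight) + std::pow(v, 1.0 / gamma) * gamma_weight;
    return v >= 1.0 ? 255 : (uint8_t)(256.0 * v);
}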

Firefox WebGL 2.0 RGBA texture strange behavior

I desperately sought somewhere to discuss this strange bug I found in Firefox, but it seems hard to reach the Mozilla crew.
The context is very simple: WebGL 2.0, drawing text using the well-known fontmap technique with point sprites.
One image is better than 1000 words:
On the right Chromium, where all is OK; on the left Firefox... and uhhggh?!
Questions:
Why is the text yellow in Firefox despite the fact that it should be white?
Why does the text have strange black pixels in Firefox?
This seems to be a kind of "sharpen" filter... but WHY?
Some details:
This is exactly the same code for both browsers.
The fontmap texture is generated using an "off-screen" canvas; it is RGBA with RGB all white and the characters printed in the alpha channel. I verified the generated picture on both browsers; they are not exactly the same, but both appear OK (no strange pixels or black borders, etc.), so the problem does not seem to be there.
The WebGL texture is RGBA/RGBA/UNSIGNED_BYTE (as usual), MIN and MAG filters set to NEAREST, no mipmaps, with WRAP S/T set to CLAMP_TO_EDGE (changing this changes nothing, so it doesn't matter).
The generated texture is NPOT, but I don't think the problem is there.
The blend equation used to render the text is (the usual) SRC_ALPHA, ONE_MINUS_SRC_ALPHA.
I tested with the blend equation SRC_ALPHA, ONE, and in that case Firefox behaves correctly (but additive blending is not what I want!).
Firefox Version: 55.0.2 (64 bits) Mozilla Firefox for Ubuntu
Chromium version: 60.0.3112.113 Built on Ubuntu, running on Ubuntu 16.04 (64 bits)
Here is the fragment shader (using Point-Sprite to draw each char):
precision highp float;

in float chr_c;
uniform vec4 material_col;
uniform sampler2D spl2d_col;
vec2 chr_u;
out vec4 fragColor;

void main(void) {
    // 0.020833333 = 1/48, the size of one cell in the 48x48 fontmap grid
    chr_u.x = (gl_PointCoord.x + mod(chr_c, 48.0)) * 0.020833333;
    chr_u.y = (gl_PointCoord.y + floor(chr_c / 48.0)) * 0.020833333;
    fragColor = texture(spl2d_col, chr_u) * material_col;
}
Here is the code used to generate the fontmap texture:
var i, s, x, y, m, w, h, a, o, mx, cv, ct;
mx = c*c; // 'c' is the rows/columns count (here: 48 rows * 48 cols)
cv = document.createElement('canvas');
ct = cv.getContext('2d');
x = 0;
y = 0;
m = 65535;
// determine the cell size according to the chars' size
for (i = 0; i < mx; i++) {
    s = String.fromCharCode(i);
    w = ct.measureText(s).width;
    h = ct.measureText(s).height;
    if (x < w) x = w;
    if (y < h) y = h;
    if (y < m && (y > 0)) m = y;
}
var r = Math.ceil((y+(y-m) > x) ? y+(y-m) : x);
w = r * c;
h = r * c;
cv.width = w;
cv.height = h;
ct.fillStyle = 'rgba(255,255,255,0.0)';
ct.fillRect(0, 0, w, h);
ct.font = 'normal ' + p + 'pt ' + f;
ct.fillStyle = 'rgba(255,255,255,1.0)';
ct.textAlign = 'left';
ct.textBaseline = 'top';
for (i = 0; i < mx; i++) {
    a = Math.floor(i % c); // cell Easting (abscissa, a = X)
    o = Math.floor(i / c); // cell Northing (ordinate, o = Y)
    ct.fillText(String.fromCharCode(i), (a*r)+3, (o*r)+2);
}
var gl = this._gl;
this._blank_fnt = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, this._blank_fnt);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, ct.getImageData(0, 0, w, h));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.bindTexture(gl.TEXTURE_2D, null);
cv.remove();
Here is the (simplified) code used to draw the text:
gl.enable(gl.BLEND);
gl.depthMask(true);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.useProgram(le_shader);
gl.activeTexture(gl.TEXTURE0);
gl.uniformMatrix4fv(le_uniform1, false, le_view.matrix);
gl.uniformMatrix4fv(le_uniform2, false, le_transform.matrix);
gl.uniform4fv(le_uniform3, le_text.color);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.bindTexture(gl.TEXTURE_2D, this._blank_fnt);
gl.bindVertexArray(le_text.vao);
gl.drawArrays(gl.POINTS, 0, le_text.count);

What's the most efficient way in WebGL to find the min and max values of an RGBA float texture?

I'm storing floating-point GPGPU values in a WebGL RGBA render texture, using only the r channel to store my data (I know I should be using a more efficient texture format, but that's a separate concern).
Is there any efficient way / trick / hack to find the global min and max floating-point values without resorting to gl.readPixels? Note that just exporting the floating-point data is a hassle in WebGL, since readPixels doesn't yet support reading gl.FLOAT values.
This is the gist of how I'm currently doing things:
if (!gl) {
    gl = renderer.getContext();
    fb = gl.createFramebuffer();
    pixels = new Uint8Array(SIZE * SIZE * 4);
}
if (!!gl) {
    // TODO: there has to be a more efficient way of doing this than via readPixels...
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, data.rtTemp2.__webglTexture, 0);
    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) == gl.FRAMEBUFFER_COMPLETE) {
        // HACK: we're pickling a single float value in every 4 bytes
        // because webgl currently doesn't support reading gl.FLOAT
        // textures.
        gl.readPixels(0, 0, SIZE, SIZE, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
        var max = -100, min = 100;
        for (var i = 0; i < SIZE; ++i) {
            for (var j = 0; j < SIZE; ++j) {
                var o = 4 * (i * SIZE + j);
                var x = pixels[o + 0];          // sign flag: 0 means negative
                var y = pixels[o + 1] / 255.0;  // |v|, saturated at 1.0
                var z = pixels[o + 2] / 255.0;  // 1/|v| when |v| > 1, else 1.0
                var v = (x <= 1 ? -1.0 : 1.0) * y;
                if (z > 0.0) { v /= z; }
                max = Math.max(max, v);
                min = Math.min(min, v);
            }
        }
        // ...
    }
}
(using a fragment shader that outputs floating-point data in the following format, suitable for UNSIGNED_BYTE parsing...
<script id="fragmentShaderCompX" type="x-shader/x-fragment">
uniform sampler2D source1;
uniform sampler2D source2;
uniform vec2 resolution;
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
float v = texture2D(source1, uv).r + texture2D(source2, uv).r;
vec4 oo = vec4(1.0, abs(v), 1.0, 1.0);
if (v < 0.0) {
oo.x = 0.0;
}
v = abs(v);
if (v > 1.0) {
oo.y = 1.0;
oo.z = 1.0 / v;
}
gl_FragColor = oo;
}
</script>
Without compute shaders, the only thing that comes to mind is using a fragment shader to do the reduction. For a 100x100 texture you could try rendering to a 20x20 grid texture, have the fragment shader do 5x5 lookups (with GL_NEAREST) to determine min and max, then download the 20x20 texture and do the rest on the CPU. Or do another pass to reduce it again. I don't know for which grid sizes it's most efficient, though; you'll have to experiment. Maybe this helps, or try googling "reduction gpu".
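To make the reduction concrete, here is a C++ sketch of what a single pass computes; on the GPU each output cell would be one fragment invocation doing tile x tile texture fetches (the names and flat-array layout are mine):

#include <algorithm>
#include <vector>

struct MinMax { float min, max; };

// One reduction pass: shrink a (size x size) float grid by a factor
// of `tile` (5 in the answer above), keeping per-tile min and max.
std::vector<MinMax> reducePass(const std::vector<float>& src, int size, int tile)
{
    int outSize = size / tile;
    std::vector<MinMax> dst(outSize * outSize);
    for (int oy = 0; oy < outSize; ++oy)
        for (int ox = 0; ox < outSize; ++ox) {
            MinMax mm = { src[(oy * tile) * size + ox * tile],
                          src[(oy * tile) * size + ox * tile] };
            for (int ty = 0; ty < tile; ++ty)
                for (int tx = 0; tx < tile; ++tx) {
                    float v = src[(oy * tile + ty) * size + (ox * tile + tx)];
                    mm.min = std::min(mm.min, v);
                    mm.max = std::max(mm.max, v);
                }
            dst[oy * outSize + ox] = mm;
        }
    return dst;
}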
Render 1 vertex on a 1x1 framebuffer and, within the shader, sample the whole previously rendered texture. That way you are testing the texture on the GPU, which should be fast enough for real-time (or not?); in any case it is definitely faster than doing it on the CPU, and the output is the min/max value.
I also ran across the suggestion to mipmap the texture and step through the different levels.
These links might be helpful:
http://www.gamedev.net/topic/559942-glsl--find-global-min-and-max-in-texture/
http://www.opengl.org/discussion_boards/showthread.php/175692-most-efficient-way-to-get-maximum-value-in-texture
Hope this helps.

Gamma Adjustment on the HTML5 Canvas?

I found a way to increase the gamma, but no way to decrease it! This article states a formula for increasing the gamma. The formula works for increasing the gamma but not for decreasing it, even if I apply the reduction on a new instance of the canvas. I tried redrawing the canvas and using a negative value for the gamma calculation, but I don't get my original canvas back.
//For increasing, I tried
gamma = 0.5;
gammacorrection = 1/gamma;
r = Math.pow(255 * (r / 255), gammacorrection);
g = ...
b = ...
//For decreasing
gamma = -0.5;
gammacorrection = 1/gamma;
r = Math.pow(255 * (r / 255), gammacorrection);
g = ...
b = ...
First part works. Second doesn't.
For the sake of completeness, here's a working piece of code:
async function adjustGamma(gamma) {
    const gammaCorrection = 1 / gamma;
    const canvas = document.getElementById('canvasOutput');
    const ctx = canvas.getContext('2d');
    const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    const data = imageData.data;
    for (var i = 0; i < data.length; i += 4) {
        data[i] = 255 * Math.pow((data[i] / 255), gammaCorrection);
        data[i+1] = 255 * Math.pow((data[i+1] / 255), gammaCorrection);
        data[i+2] = 255 * Math.pow((data[i+2] / 255), gammaCorrection);
    }
    ctx.putImageData(imageData, 0, 0);
}
Here the function adjusts the gamma, based on the formula in the article linked by the OP, on the canvas with id "canvasOutput".
There is no negative gamma correction. You should save the original values and use them when making gamma changes, and set gamma to 1.0 to revert to the original.
Also note that you have the order of operations wrong (the exponentiation comes before the multiplication by 255).
var originals = { r: r, g: g, b: b };
// increase
gamma = 0.5;
gammacorrection = 1/gamma;
r = 255 * Math.pow(( originals.r / 255), gammacorrection);
g = ...
b = ...
// revert to original
gamma = 1;
gammacorrection = 1/gamma;
r = 255 * Math.pow(( originals.r / 255), gammacorrection);
g = ...
b = ...
There is no negative value for gamma; ideally the value ranges between 0.01 and 7.99. Reverting the gamma to the original value should therefore be possible either by creating a new canvas instance with the original values of the image, or by keeping a pool of pixels from the original image and reverting back to it.
I wrote a script showing how I would construct the algorithm for gamma reduction.
var gamma = 0.5;
var gammaCorrection = 1 / gamma;
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);

function GetPixelColor(x, y) {
    var index = parseInt(x + canvas.width * y) * 4;
    var rgb = {
        r: imageData.data[index + 0],
        g: imageData.data[index + 1],
        b: imageData.data[index + 2]
    };
    return rgb;
}

function SetPixelColor(x, y, color) {
    var index = parseInt(x + canvas.width * y) * 4;
    var data = imageData.data;
    data[index+0] = color.r;
    data[index+1] = color.g;
    data[index+2] = color.b;
}

for (y = 0; y < canvas.height; y++) {
    for (x = 0; x < canvas.width; x++) {
        var color = GetPixelColor(x, y);
        // exponentiation first, then scaling back to 0-255
        var newRed = 255 * Math.pow(color.r / 255, gammaCorrection);
        var newGreen = 255 * Math.pow(color.g / 255, gammaCorrection);
        var newBlue = 255 * Math.pow(color.b / 255, gammaCorrection);
        SetPixelColor(x, y, { r: newRed, g: newGreen, b: newBlue });
    }
}
ctx.putImageData(imageData, 0, 0); // write the modified pixels back
I don't know how the application is supposed to adjust the gamma value, but I suppose it's done with a value adjuster. If so, you should adjust the gamma value dynamically given the min and max range. I didn't test the code, as that wasn't my scope, but the idea is hopefully clear.
EDIT:
To understand the principle of gamma correction, first let's define gamma itself.
Gamma is a characteristic of the monitor: it alters the pixel values it is given as input. Gamma correction is the act of inverting that process for linear RGB values so that the final output remains linear. For example, if you calculate that the light intensity of an object is 0.5, you don't store the result as 0.5 in the pixel; you store it as pow(0.5, 1.0/2.2) = 0.73. When you send 0.73 to the monitor, it applies its gamma to the value and produces pow(0.73, 2.2) = 0.5, which is what you want. To do this, you apply the inverse gamma function.
o=pow(i, 1.0/gamma)
Where
o is the output value.
i is the input value.
gamma is the gamma value used by your monitor.
So gamma correction is nothing more than raising the input value to the power of the inverse of gamma; to restore the original value, you raise the corrected value back to the power of gamma itself.
The blue line represents the inverse gamma curve you need to apply to your pixels before they're sent to the monitor. When your monitor applies its gamma curve (red line) to the pixels, the result is a linear line (green line) that represents your intended RGB pixel values.
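A tiny C++ sketch of that round trip, using the usual 2.2 as an assumed monitor gamma; it also shows why "negative gamma" isn't the way to revert, since raising to the power of gamma itself is the inverse:

#include <cmath>
#include <cstdio>

int main()
{
    const double gamma = 2.2;
    double linear  = 0.5;                            // original intensity
    double encoded = std::pow(linear, 1.0 / gamma);  // gamma correction -> ~0.73
    double decoded = std::pow(encoded, gamma);       // monitor / inverse step -> 0.5
    std::printf("%f %f %f\n", linear, encoded, decoded);
    return 0;
}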

Smooth spectrum for Mandelbrot Set rendering

I'm currently writing a program to generate really enormous (65536x65536 pixels and above) Mandelbrot images, and I'd like to devise a spectrum and coloring scheme that does them justice. The Wikipedia featured Mandelbrot image seems like an excellent example, especially how the palette remains varied at all zoom levels of the sequence. I'm not sure whether it's rotating the palette or doing some other trick to achieve this, though.
I'm familiar with the smooth coloring algorithm for the Mandelbrot set, so I can avoid banding, but I still need a way to assign colors to the output values from this algorithm.
The images I'm generating are pyramidal (e.g., a series of images, each of which has half the dimensions of the previous one), so I can use a rotating palette of some sort, as long as the change in the palette between subsequent zoom levels isn't too obvious.
This is the smooth color algorithm:
Let's say you start with the complex number z0 and iterate n times until it escapes. Let the end point be zn.
A smooth value would be
nsmooth := n + 1 - Math.log(Math.log(zn.abs()))/Math.log(2)
This only works for the Mandelbrot set; if you want to compute a smooth value for Julia sets, then use:
Complex z = new Complex(x, y);
double smoothcolor = Math.exp(-z.abs());
for (i = 0; i < max_iter && z.abs() < 30; i++) {
    z = f(z);
    smoothcolor += Math.exp(-z.abs());
}
Then smoothcolor is in the interval (0, max_iter).
Divide smoothcolor by max_iter to get a value between 0 and 1.
To get a smooth color from that value, you can call, for example (in Java):
Color.HSBtoRGB(0.95f + 10 * smoothcolor ,0.6f,1.0f);
since the first of the HSB color parameters defines the color's position on the color circle.
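Here is the same smooth value and hue mapping as a small C++ sketch for the Mandelbrot case (the function names are mine; znAbs is |zn| at escape):

#include <cmath>

// Smooth iteration value: n + 1 - log(log|zn|) / log 2.
double smoothIteration(int n, double znAbs)
{
    return n + 1.0 - std::log(std::log(znAbs)) / std::log(2.0);
}

// Wrap into a [0, 1) hue the way HSBtoRGB treats its first parameter.
double hueFromSmooth(double smooth)
{
    double hue = 0.95 + 10.0 * smooth;
    return hue - std::floor(hue);
}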
Use the smooth coloring algorithm to calculate all of the values within the viewport, then map your palette from the lowest to the highest value. That way, as you zoom in and the higher values are no longer visible, the palette will scale down as well. With the same constants for n and B you will end up with a range of 0.0 to 1.0 for a fully zoomed-out set, but at deeper zooms the dynamic range will shrink, to say 0.0-0.1 at 200% zoom or 0.0-0.0001 at 20000% zoom.
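A sketch of that viewport renormalization, assuming the min and max smooth values of the visible region are already known:

// Rescale a smooth value over the current viewport so the full palette
// stays in use regardless of zoom depth.
double normalizeToViewport(double v, double vmin, double vmax)
{
    return (v - vmin) / (vmax - vmin);
}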
Here is a typical inner loop for a naive Mandelbrot generator. To get a smooth colour you want to pass in the squared real and imaginary components and the iteration you bailed out at. I've included the Mandelbrot code so you can see which vars to use to calculate the colour.
for (ix = 0; ix < panelMain.Width; ix++)
{
    // cx varies per column; cy and iy come from the enclosing row loop
    cx = cxMin + (double)ix * pixelWidth;
    // init this go
    zx = 0.0;
    zy = 0.0;
    zx2 = 0.0;
    zy2 = 0.0;
    for (i = 0; i < iterationMax && ((zx2 + zy2) < er2); i++)
    {
        zy = zx * zy * 2.0 + cy;
        zx = zx2 - zy2 + cx;
        zx2 = zx * zx;
        zy2 = zy * zy;
    }
    if (i == iterationMax)
    {
        // interior, part of the set: colour it black
        g.FillRectangle(sbBlack, ix, iy, 1, 1);
    }
    else
    {
        // outside: colour proportional to the time/distance it took to escape
        SolidBrush sbNeato = new SolidBrush(MapColor(i, zx2, zy2));
        g.FillRectangle(sbNeato, ix, iy, 1, 1);
    }
}
and MapColor below (see this link to get the ColorFromHSV function):
private Color MapColor(int i, double r, double c)
{
    double di = (double)i;
    double zn;
    double hue;
    zn = Math.Sqrt(r + c);
    hue = di + 1.0 - Math.Log(Math.Log(Math.Abs(zn))) / Math.Log(2.0); // 2 is the escape radius
    hue = 0.95 + 20.0 * hue; // adjust to make it prettier
    // the hsv function expects values from 0 to 360
    while (hue > 360.0)
        hue -= 360.0;
    while (hue < 0.0)
        hue += 360.0;
    return ColorFromHSV(hue, 0.8, 1.0);
}
MapColour is "smoothing" the bailout values from 0 to 1 which then can be used to map a colour without horrible banding. Playing with MapColour and/or the hsv function lets you alter what colours are used.
This seems simple to do by trial and error. Assume you can define HSV1 and HSV2 (hue, saturation, value) for the endpoint colors you wish to use (black and white; blue and yellow; dark red and light green; etc.), and assume you have an algorithm to assign a value P between 0.0 and 1.0 to each of your pixels. Then that pixel's color becomes
(H2 - H1) * P + H1 = HP
(S2 - S1) * P + S1 = SP
(V2 - V1) * P + V1 = VP
With that done, just observe the results and see how you like them. If the algorithm that assigns P is continuous, the gradient should be smooth as well.
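That interpolation as a minimal C++ sketch (the HSV struct is mine):

struct HSV { double h, s, v; };

// Linearly interpolate each HSV component by P in [0.0, 1.0].
HSV lerp(const HSV& hsv1, const HSV& hsv2, double p)
{
    return { (hsv2.h - hsv1.h) * p + hsv1.h,
             (hsv2.s - hsv1.s) * p + hsv1.s,
             (hsv2.v - hsv1.v) * p + hsv1.v };
}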
My eventual solution was to create a nice-looking (and fairly large) palette, store it as a constant array in the source, and interpolate between indexes in it using the smooth coloring algorithm. The palette wraps (and is designed to be continuous), but this doesn't appear to matter much.
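A minimal sketch of such a wrapped palette lookup, assuming a non-negative smooth value v; the palette entries below are placeholders, not the actual palette:

#include <cmath>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Placeholder palette; the real one is a large hand-tuned constant array.
static const RGB kPalette[] = { {0,7,100}, {32,107,203}, {237,255,255},
                                {255,170,0}, {0,2,0} };
static const int kPaletteSize = sizeof(kPalette) / sizeof(kPalette[0]);

// Interpolate between neighbouring palette entries, wrapping around.
RGB paletteColor(double v)
{
    double idx = v * kPaletteSize;  // scale smooth value into palette space
    int i0 = (int)std::floor(idx) % kPaletteSize;
    int i1 = (i0 + 1) % kPaletteSize;
    double t = idx - std::floor(idx);
    RGB a = kPalette[i0], b = kPalette[i1];
    return { (uint8_t)(a.r + (b.r - a.r) * t),
             (uint8_t)(a.g + (b.g - a.g) * t),
             (uint8_t)(a.b + (b.b - a.b) * t) };
}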
What's going on with the color mapping in that image is that it uses a 'log transfer function' on the index (according to the documentation). Exactly how it does that I still haven't figured out. The program that produced it uses a palette of 400 colors, so the index ranges over [0, 400), wrapping around if needed. I've managed to get pretty close to matching its behavior. I use an index range of [0, 1) and map it like so:
double value = Math.log(0.021 * (iteration + delta + 60)) + 0.72;
value = value - Math.floor(value);
It's kind of odd that I have to use these special constants to get my results to match, since I doubt the original does any of that. But whatever works in the end, right?
Here you can find a version in JavaScript.
Usage:
    var rgbcol = MapColor(Iteration, Zy2, Zx2);
    point(ctx, iX, iY, rgbcol[0], rgbcol[1], rgbcol[2]);
The functions:
/*
* The Mandelbrot Set, in HTML5 canvas and javascript.
* https://github.com/cslarsen/mandelbrot-js
*
* Copyright (C) 2012 Christian Stigen Larsen
*/
/*
* Convert hue-saturation-value/luminosity to RGB.
*
* Input ranges:
* H = [0, 360] (integer degrees)
* S = [0.0, 1.0] (float)
* V = [0.0, 1.0] (float)
*/
function hsv_to_rgb(h, s, v)
{
    if (v > 1.0) v = 1.0;
    var hp = h / 60.0;
    var c = v * s;
    var x = c * (1 - Math.abs((hp % 2) - 1));
    var rgb = [0, 0, 0];

    if (0 <= hp && hp < 1) rgb = [c, x, 0];
    if (1 <= hp && hp < 2) rgb = [x, c, 0];
    if (2 <= hp && hp < 3) rgb = [0, c, x];
    if (3 <= hp && hp < 4) rgb = [0, x, c];
    if (4 <= hp && hp < 5) rgb = [x, 0, c];
    if (5 <= hp && hp < 6) rgb = [c, 0, x];

    var m = v - c;
    rgb[0] += m;
    rgb[1] += m;
    rgb[2] += m;

    rgb[0] *= 255;
    rgb[1] *= 255;
    rgb[2] *= 255;
    rgb[0] = parseInt(rgb[0]);
    rgb[1] = parseInt(rgb[1]);
    rgb[2] = parseInt(rgb[2]);
    return rgb;
}
// http://stackoverflow.com/questions/369438/smooth-spectrum-for-mandelbrot-set-rendering
// alex russel : http://stackoverflow.com/users/2146829/alex-russell
function MapColor(i, r, c)
{
    var di = i;
    var zn;
    var hue;
    zn = Math.sqrt(r + c);
    hue = di + 1.0 - Math.log(Math.log(Math.abs(zn))) / Math.log(2.0); // 2 is the escape radius
    hue = 0.95 + 20.0 * hue; // adjust to make it prettier
    // the hsv function expects values from 0 to 360
    while (hue > 360.0)
        hue -= 360.0;
    while (hue < 0.0)
        hue += 360.0;
    return hsv_to_rgb(hue, 0.8, 1.0);
}
