How to get the correct `RGB` value of a `PNG` image?

Mapbox provides global elevation data with height encoded in a PNG image. Height is decoded as height = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1). Details are at https://www.mapbox.com/blog/terrain-rgb/.
I want to import the height data to generate terrains in Unity3D.
Texture2D dem = (Texture2D)AssetDatabase.LoadAssetAtPath("Assets/dem/12/12_3417_1536.png", typeof(Texture2D));
for (int i = 0; i < width; i++)
{
    for (int j = 0; j < height; j++)
    {
        Color c = dem.GetPixel(i, j);
        float R = c.r * 255;
        float G = c.g * 255;
        float B = c.b * 255;
        array[i, j] = -10000 + ((R * 256 * 256 + G * 256 + B) * 0.1f);
    }
}
Here I set a breakpoint; the rgba value of the first pixel is RGBA(0.000, 0.592, 0.718, 1.000), so c.r is 0. The height is incorrect, as this point represents somewhere on a mountain.
Then I opened the image in Photoshop and got the RGB of the first pixel: R=1, G=152, B=179.
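(For reference, plugging those Photoshop values into the decode formula gives height = -10000 + ((1 * 65536 + 152 * 256 + 179) * 0.1) = -10000 + 10462.7 = 462.7 m, a plausible height for a point on a mountain.)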
I also wrote a test program in C#:
System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap("12_3417_1536.png");
Color a = bitmap.GetPixel(0, 0);
It shows that Color a is (R, G, B, A) = (1, 147, 249, 255).
Here is the image I test:
https://api.mapbox.com/v4/mapbox.terrain-rgb/12/3417/1536.pngraw?access_token=pk.eyJ1Ijoib2xlb3RpZ2VyIiwiYSI6ImZ2cllZQ3cifQ.2yDE9wUcfO_BLiinccfOKg
Why do I get different RGBA values with different methods? Which one is correct?
According to the comments below, a different read order and texture compression in Unity may result in different rgba values for the pixel at (0,0).
Now I want to focus on: how do I convert rgba (0~1) to RGBA (0~255)?
Is it r_ps = r_unity * 255? But then how can I explain r = 0 in Unity and R = 1 in Photoshop for the pixel at (0,0)?

Try disabling compression in the texture's import settings in Unity (No compression). Alternatively, if you fetch the data at runtime, you can load the raw PNG bytes with Texture2D.LoadImage() to avoid compression artifacts.
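A minimal sketch of that runtime path (the file location is illustrative; in newer Unity versions LoadImage is provided by the ImageConversion extensions, but the call looks the same):
using System.IO;
using UnityEngine;
public class DemLoader : MonoBehaviour
{
    void Start()
    {
        // Read the raw PNG bytes; no import-time texture compression is applied
        byte[] png = File.ReadAllBytes(Path.Combine(Application.streamingAssetsPath, "12_3417_1536.png"));
        // Decode into an uncompressed RGBA32 texture (LoadImage resizes it to the PNG's dimensions)
        Texture2D dem = new Texture2D(2, 2, TextureFormat.RGBA32, false);
        dem.LoadImage(png);
        // Color32 channels are bytes 0-255, so no 0-1 scaling is needed
        Color32 c = dem.GetPixels32()[0];
        float height = -10000 + ((c.r * 256 * 256 + c.g * 256 + c.b) * 0.1f);
        Debug.Log(height);
    }
}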

I will assume you are using the same picture and that there aren't two 12_3417_1536.png files in separate folders.
Each of these functions has a different idea of which pixel is at (0,0). I'm not sure what you mean by the "first" pixel when you tested with Photoshop, but texture coordinates in Unity start at the lower left corner.
When I tested the lower left corner pixel using Paint, I got the same value as you did with Photoshop. However, if you test the upper left corner, you get (1,147,249,255), which is the result bitmap.GetPixel returns.
The Unity values that you're getting seem to be way off. Try calling dem.GetPixel(0,0) so that you're sure you're analyzing the simplest case.
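In other words, to sample the same pixel from both APIs, you can flip the vertical coordinate on the Unity side (a sketch; x and y are bitmap-style coordinates counted from the top-left):
// System.Drawing.Bitmap counts rows from the top; Unity's GetPixel counts from the bottom,
// so the pixel that bitmap.GetPixel(x, y) returns corresponds to:
Color c = dem.GetPixel(x, dem.height - 1 - y);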

Related

Processing: Efficiently create uniform grid

I'm trying to create a grid from an image (the way one would tile a background). Here's what I've been using:
PImage bgtile;
PGraphics bg;
int tilesize = 50;

void setup() {
  int t = millis();
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");
  int bgw = ceil(((float) width) / tilesize) + 1;
  int bgh = ceil(((float) height) / tilesize) + 1;
  bg = createGraphics(bgw * tilesize, bgh * tilesize);
  bg.beginDraw();
  for (int i = 0; i < bgw; i++) {
    for (int j = 0; j < bgh; j++) {
      bg.image(bgtile, i * tilesize, j * tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();
  print(millis() - t);
}
The timing code says that this takes about a quarter of a second, but by my count there's a full second once the window opens before anything shows up on screen (which should happen as soon as draw is first run). Is there a faster way to get this same effect? (I want to avoid rendering bgtile hundreds of times in the draw loop for obvious reasons)
One way could be to make use of the GPU and let OpenGL repeat a texture for you.
Processing makes it fairly easy to repeat a texture via textureWrap(REPEAT).
Instead of drawing an image, you'd make your own quad shape, and instead of calling vertex(x, y) you'd call vertex(x, y, u, v), passing texture coordinates (more low-level info at the OpenGL link above). The simple idea is that x, y control the geometry on screen while u, v control how the texture is applied to the geometry.
Another thing you can control is textureMode(), which lets you choose how you specify the texture coordinates (U, V):
IMAGE mode is the default: you use pixel coordinates (based on the dimensions of the texture).
NORMAL mode uses values between 0.0 and 1.0 (also known as normalised values), where 1.0 means the maximum extent of the texture (e.g. image width for U or image height for V); you don't need to know the texture image dimensions.
Here's a basic example based on the textureMode() example above:
PImage img;

void setup() {
  fullScreen(P2D);
  noStroke();
  img = loadImage("https://processing.org/examples/moonwalk.jpg");
  // texture mode can be IMAGE (pixel dimensions) or NORMAL (0.0 to 1.0)
  // NORMAL means 1.0 is full width (for U) or height (for V) without having to know the image resolution
  textureMode(NORMAL);
  // this is what will handle tiling for you
  textureWrap(REPEAT);
}

void draw() {
  // drag mouse on X axis to change tiling
  int tileRepeats = (int)map(constrain(mouseX, 0, width), 0, width, 1, 100);
  // draw a textured quad
  beginShape(QUAD);
  // set the texture
  texture(img);
  //     x      y       U            V
  vertex(0,     0,      0,           0);
  vertex(width, 0,      tileRepeats, 0);
  vertex(width, height, tileRepeats, tileRepeats);
  vertex(0,     height, 0,           tileRepeats);
  endShape();
  text((int)frameRate + "fps", 15, 15);
}
Drag the mouse on the X axis to control the number of repetitions.
In this simple example both vertex coordinates and texture coordinates are going clockwise (top left, top right, bottom right, bottom left order).
There are probably other ways to achieve the same result: using a PShader comes to mind.
Your approach of caching the tiles in setup() is OK.
Flattening your nested loop into a single loop may shave off a few milliseconds at best, nothing substantial.
If you tried to cache my snippet above it would make a minimal difference.
In this particular case, because of the back and forth between Java and OpenGL (via JOGL), as far as I can tell using VisualVM there's not a lot of room for improvement, since simply swapping buffers (e.g. bg.image()) takes so long.
An easy way to do this would be to use Processing's built-in get(), which returns a PImage of the region you pass; for example, PImage pic = get(0, 0, width, height); will capture a "screenshot" of your entire window. So you can create the image like you already are, then take a screenshot and display that screenshot.
PImage bgtile;
PGraphics bg;
PImage screenGrab;
int tilesize = 50;

void setup() {
  fullScreen(P2D);
  background(0);
  bgtile = loadImage("bgtile.png");
  int bgw = ceil(((float) width) / tilesize) + 1;
  int bgh = ceil(((float) height) / tilesize) + 1;
  bg = createGraphics(bgw * tilesize, bgh * tilesize);
  bg.beginDraw();
  for (int i = 0; i < bgw; i++) {
    for (int j = 0; j < bgh; j++) {
      bg.image(bgtile, i * tilesize, j * tilesize, tilesize, tilesize);
    }
  }
  bg.endDraw();
  // draw the offscreen buffer to the window once so get() has something to grab
  image(bg, 0, 0);
  screenGrab = get(0, 0, width, height);
}

void draw() {
  image(screenGrab, 0, 0);
}
This will still take a little bit to generate the image, but once it does, there is no need to use the for loops again unless you change the tilesize.
@George Profenza's answer looks more efficient than my solution, but mine may take a little less modification to the code you already have.

OpenCV : Transparent area of imported .png file is now white

I'm trying to develop a small and simplistic webcam-controlled game, where the user moves a figure on the x-axis by tracking a light source with the webcam (e.g. a flashlight).
So far my code generates a target object every couple of seconds at a random location in the picture.
That object is stored as a Mat via
Mat target = imread("target.png");
In order to paint the object onto the background image, I'm using
bgClear.copyTo(temp);

for (int i = targetX; i < target.cols + targetX; i++) {
    for (int j = targetY; j < target.rows + targetY; j++) {
        temp.at<Vec3b>(j, i) = target.at<Vec3b>(j - targetY, i - targetX);
    }
}

temp.copyTo(bg);
where bgClear represents the clean background, temp the background copy being edited, and bg the final background that is shown, including the object.
targetX and targetY are the starting coordinates of the object relative to the background (targetX is randomly generated beforehand so that the object spawns at a random location in the upper half of the image), so I'm only iterating over the range of the object, not the whole background.
It works so far, but I have a problem:
The transparent area of the imported image is now white, and I don't seem to be able to fix it by checking the pixel values with something like
if (target.at<Vec3b>(Point(j - targetY, i - targetX))[0] != 255 &&
    target.at<Vec3b>(Point(j - targetY, i - targetX))[1] != 255 &&
    target.at<Vec3b>(Point(j - targetY, i - targetX))[2] != 255)
before I am actually replacing the pixel.
I've also tried loading the .png file with the -1 flag (alpha channel), but then the image just seems ghostly and can barely be seen.
In case you have trouble imagining what I'm talking about, here's a partial screenshot of it: Screenshot
Any advice on how I might fix this ?
Regards,
Daniel
You need to handle transparency manually. The general idea is: while copying to temp, only copy pixels that are opaque, i.e. whose alpha value is high.
Use CV_LOAD_IMAGE_UNCHANGED (= -1) in imread.
Split target into four single-channel images using split.
Merge the first three channels to form a BGR image using merge.
In the paint loop, use the newly formed BGR image as the source and the unmerged fourth channel (alpha) as the mask.
...as I was mentioning in my comment to asif's helpful answer:
Mat target = imread("target.png", CV_LOAD_IMAGE_UNCHANGED); // load image including alpha
Mat targetBGR(target.rows, target.cols, CV_8UC3);   // create BGR mat
Mat targetAlpha(target.rows, target.cols, CV_8UC1); // create alpha mat
Mat out[] = { targetBGR, targetAlpha };             // array of output matrices
int from_to[] = { 0,0, 1,1, 2,2, 3,3 };             // array of index pairs
mixChannels(&target, 1, out, 2, from_to, 4);        // split target into 3-channel BGR plus 1-channel alpha
...as described in this example (minus the R-B channel swapping).
...later in the pixel-processing loop:
if (targetAlpha.at<uchar>(j - targetY, i - targetX) > 0)
    temp.at<Vec3b>(j, i) = targetBGR.at<Vec3b>(j - targetY, i - targetX);
Working like a charm!
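As an aside, the same paint step can be done without an explicit pixel loop, since OpenCV's Mat::copyTo accepts a mask (a sketch, assuming the target rectangle stays inside temp):
targetBGR.copyTo(temp(Rect(targetX, targetY, target.cols, target.rows)), targetAlpha);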

How does this MATLAB code make an image into a binary image?

v = videoinput('winvideo', 1, 'YUY2_320x240');
s = serial('COM1', 'BaudRate', 9600);
fopen(s);
while(1)
    h = getsnapshot(v);
    rgb = ycbcr2rgb(h);
    for i = 1:240
        for j = 1:320
            if rgb(i,j,1) > 140 && rgb(i,j,2) < 100  % use your own conditions
                bm(i, j) = 1;
            else
                bm(i, j) = 0;
            end
        end
    end
This is code I got from my senior regarding image processing in MATLAB. The code above converts the image to a binary image, but I didn't understand the expression rgb(i,j,1) > 140. How was that 140 selected, and what does rgb(i,j,1) mean?
You have an RGB image rgb where the third dimension indexes the RGB color planes. Thus, rgb(i,j,1) is the red value at row i, column j.
The expression rgb(i,j,1) > 140 tests whether this red value is greater than 140. The value 140 appears to be ad hoc, picked for a specific task.
The code is extremely inefficient as there is no need for a loop:
bm = rgb(:,:,1)>140 & rgb(:,:,2)<100;
Note the change from && to the element-wise operator &. Here I'm assuming that the size of rgb is 240x320x3.
Edit: The threshold values you choose completely depend on the task, but a common approach to automatic thresholding is Otsu's method, graythresh. You can apply it to a single color plane to get a threshold:
redThresh = graythresh(rgb(:,:,1)) * 255;
Note that graythresh returns a value on [0,1], so you have to scale that by the data range.
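Applied to the code above, the red condition would then become, e.g., bm = rgb(:,:,1) > redThresh; (a sketch; a green-channel threshold can be computed the same way if you want to keep the second condition).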

How does depth work in a frustum environment?

I need some help understanding the basics of a frustum transformation. Mainly, how depth works.
The following uses a viewport of 768x1024. Using an orthographic projection, a square of 768x768 (z defaults to 0) with no translation or scaling, and glViewport(0, 0, 768, 1024), the square easily fills the width of the frame:
Now when I change the projection to a frustum and mess with the z translation, the square scales appropriately due to the perspective change.
Here is the same square in such an environment:
I can play with this z translation, as well as the near and far parameters of the frustum matrix, and make the square change its apparent onscreen size accordingly. Fine.
But what I cannot figure out is the obvious relationship between its onscreen size and these depth parameters.
For example, suppose I want to use a frustum but have the square fill the frame width, as in my first example image above. How to achieve this?
I would think that if the z translation matched the near plane, then you'd essentially have a square "right in front of the camera", filling the frame. But I cannot figure out a way to achieve this. If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!), filling the width of the frame (where the frustum's left and right planes are the same as in the orthographic projection).
I could paste a bunch of code here to show what I'm doing, but I think the concept is clear. I just want to figure out where the near plane actually is and how to situate something on it, as this will help me understand how the frustum is working.
Okay here is the relevant code I'm using, where width=768 and height=1024.
My vertex shader is the simple gl_Position=Projection*Modelview*Position;
My projection matrix (frustum) is thus:
Frustum(-width/2, width/2, -height/2, height/2, 1,10);
This function is:
static Matrix4<T> Frustum(T left, T right, T bottom, T top, T near, T far)
{
    T a = 2 * near / (right - left);
    T b = 2 * near / (top - bottom);
    T c = (right + left) / (right - left);
    T d = (top + bottom) / (top - bottom);
    T e = -(far + near) / (far - near);
    T f = -2 * far * near / (far - near);
    Matrix4 m;
    m.x.x = a; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = b; m.y.z = 0; m.y.w = 0;
    m.z.x = c; m.z.y = d; m.z.z = e; m.z.w = -1;
    m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 1;
    return m;
}
My square is just two 2D triangles with a default z=0, with the left edge at x=-768/2 and the right edge at x=768/2. The square is clearly working properly, as my first image above shows, using the orthographic projection. (Though I switched to the frustum projection for this question.)
To draw the square, I translate the Modelview with:
Translate(0, 0, -1);
Using:
static Matrix4<T> Translate(T x, T y, T z)
{
    Matrix4 m;
    m.x.x = 1; m.x.y = 0; m.x.z = 0; m.x.w = 0;
    m.y.x = 0; m.y.y = 1; m.y.z = 0; m.y.w = 0;
    m.z.x = 0; m.z.y = 0; m.z.z = 1; m.z.w = 0;
    m.w.x = x; m.w.y = y; m.w.z = z; m.w.w = 1;
    return m;
}
As you can see, the translation should put the square on the near plane, yet it looks like this:
If I instead translate by -1.01, just to be sure I avoid near clipping, the result is the same. If I do not translate, thus z=0, the square does not appear, as you'd expect, since it would be behind the camera.
In your frustum matrix, m.w.w should be 0, not 1. This will fix your problem.
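That is, the corrected last row of the matrix reads:
m.w.x = 0; m.w.y = 0; m.w.z = f; m.w.w = 0;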
But, the mistake isn't your fault. It's my fault! I'm actually the one who wrote that code in the first place, and unfortunately it has proliferated. It's an errata in my book (iPhone 3D Programming), which is where it first appeared.
Feeling very guilty about this!
If my near is 1 and my z translation is -1, then the square should be sitting right on the near plane itself (right?!)
Yes
, filling the width of the frame (where the frustum's left and right planes are the same as the orthogonal projection).
Not necessarily. The near plane has the extents given by the left, right, bottom and top parameters of glFrustum. A rectangle going exactly to those bounds will snugly fit the viewport when placed at the near plane distance.
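(Concretely, with the question's Frustum(-width/2, width/2, -height/2, height/2, 1, 10) and width = 768, a quad spanning x = -384 to 384 drawn at z = -1 sits exactly on the near plane and fills the viewport's width.)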

Programmatically Lighten a Color

Motivation
I'd like to find a way to take an arbitrary color and lighten it a few shades, so that I can programmatically create a nice gradient from the one color to a lighter version. The gradient will be used as a background in a UI.
Possibility 1
Obviously I can just split out the RGB values and increase them individually by a certain amount. Is this actually what I want?
Possibility 2
My second thought was to convert the RGB to HSV/HSB/HSL (Hue, Saturation, Value/Brightness/Lightness), increase the brightness a bit, decrease the saturation a bit, and then convert it back to RGB. Will this have the desired effect in general?
As Wedge said, you want to multiply to make things brighter, but that only works until one of the colors becomes saturated (i.e. hits 255 or greater). At that point, you can just clamp the values to 255, but you'll be subtly changing the hue as you get lighter. To keep the hue, you want to maintain the ratio of (middle-lowest)/(highest-lowest).
Here are two functions in Python. The first implements the naive approach which just clamps the RGB values to 255 if they go over. The second redistributes the excess values to keep the hue intact.
def clamp_rgb(r, g, b):
    return min(255, int(r)), min(255, int(g)), min(255, int(b))

def redistribute_rgb(r, g, b):
    threshold = 255.999
    m = max(r, g, b)
    if m <= threshold:
        return int(r), int(g), int(b)
    total = r + g + b
    if total >= 3 * threshold:
        return int(threshold), int(threshold), int(threshold)
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return int(gray + x * r), int(gray + x * g), int(gray + x * b)
I created a gradient starting with the RGB value (224,128,0) and multiplying it by 1.0, 1.1, 1.2, etc. up to 2.0. The upper half is the result using clamp_rgb and the bottom half is the result with redistribute_rgb. I think it's easy to see that redistributing the overflows gives a much better result, without having to leave the RGB color space.
For comparison, here's the same gradient in the HLS and HSV color spaces, as implemented by Python's colorsys module. Only the L component was modified, and clamping was performed on the resulting RGB values. The results are similar, but require color space conversions for every pixel.
I would go for the second option. Generally speaking, the RGB space is not really good for color manipulation (creating a transition from one color to another, lightening/darkening a color, etc.). Below are two sites I found with a quick search for converting between RGB and HSL:
from the "Fundamentals of Computer Graphics"
some source code in C# - should be easy to adapt to other programming languages.
In C#:
public static Color Lighten(Color inColor, double inAmount)
{
    return Color.FromArgb(
        inColor.A,
        (int) Math.Min(255, inColor.R + 255 * inAmount),
        (int) Math.Min(255, inColor.G + 255 * inAmount),
        (int) Math.Min(255, inColor.B + 255 * inAmount));
}
I've used this all over the place.
ControlPaint class in System.Windows.Forms namespace has static methods Light and Dark:
public static Color Dark(Color baseColor, float percOfDarkDark);
These methods use a private implementation of HLSColor. I wish this struct were public and in System.Drawing.
Alternatively, you can use GetHue, GetSaturation, GetBrightness on the Color struct to get the HSB components. Unfortunately, I didn't find the reverse conversion.
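If you need that reverse conversion, here is a minimal sketch (the helper name FromHsl is mine). Note that despite its name, Color.GetBrightness returns HSL lightness rather than HSB brightness, so this inverts the triple that those three getters produce:
static Color FromHsl(float hue, float saturation, float lightness)
{
    // hue in [0, 360), saturation and lightness in [0, 1],
    // matching the ranges returned by GetHue/GetSaturation/GetBrightness
    float c = (1 - Math.Abs(2 * lightness - 1)) * saturation; // chroma
    float x = c * (1 - Math.Abs(hue / 60 % 2 - 1));
    float m = lightness - c / 2;
    float r = 0, g = 0, b = 0;
    if      (hue <  60) { r = c; g = x; }
    else if (hue < 120) { r = x; g = c; }
    else if (hue < 180) { g = c; b = x; }
    else if (hue < 240) { g = x; b = c; }
    else if (hue < 300) { r = x; b = c; }
    else                { r = c; b = x; }
    return Color.FromArgb((int)Math.Round((r + m) * 255),
                          (int)Math.Round((g + m) * 255),
                          (int)Math.Round((b + m) * 255));
}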
Convert it to RGB and linearly interpolate between the original color and the target color (often white). So, if you want 16 shades between two colors, you do:
for (i = 0; i < 16; i++)
{
    colors[i].R = start.R + (i * (end.R - start.R)) / 15;
    colors[i].G = start.G + (i * (end.G - start.G)) / 15;
    colors[i].B = start.B + (i * (end.B - start.B)) / 15;
}
In order to get a lighter or a darker version of a given color you should modify its brightness. You can do this easily even without converting your color to an HSL or HSB color. For example, to make a color lighter you can use the following code:
float correctionFactor = 0.5f;
float red = (255 - color.R) * correctionFactor + color.R;
float green = (255 - color.G) * correctionFactor + color.G;
float blue = (255 - color.B) * correctionFactor + color.B;
Color lighterColor = Color.FromArgb(color.A, (int)red, (int)green, (int)blue);
If you need more details, read the full story on my blog.
Converting to HS(LVB), increasing the brightness and then converting back to RGB is the only way to reliably lighten the colour without affecting the hue and saturation values (i.e. to only lighten the colour without changing it in any other way).
A very similar question, with useful answers, was asked previously:
How do I determine darker or lighter color variant of a given color?
Short answer: multiply the RGB values by a constant if you just need "good enough", translate to HSV if you require accuracy.
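A sketch of the "good enough" multiply-and-clamp variant in C# (the helper name is mine):
static Color Scale(Color c, float factor)
{
    // factor > 1 lightens, factor < 1 darkens; channels are clamped at 255
    return Color.FromArgb(c.A,
        Math.Min(255, (int)(c.R * factor)),
        Math.Min(255, (int)(c.G * factor)),
        Math.Min(255, (int)(c.B * factor)));
}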
I used Andrew's answer and Mark's answer to make this (as of 1/2013, Firefox had no support for range inputs).
function calcLightness(l, r, g, b) {
    var tmp_r = (255 - r) * l + r;
    var tmp_g = (255 - g) * l + g;
    var tmp_b = (255 - b) * l + b;
    if (tmp_r > 255 || tmp_g > 255 || tmp_b > 255)
        return { r: r, g: g, b: b };
    else
        return { r: parseInt(tmp_r), g: parseInt(tmp_g), b: parseInt(tmp_b) };
}
I've done this both ways -- you get much better results with Possibility 2.
Any simple algorithm you construct for Possibility 1 will probably work well only for a limited range of starting saturations.
You would want to look into Poss 1 if (1) you can restrict the colors and brightnesses used, and (2) you are performing the calculation a lot in a rendering.
Generating the background for a UI won't need very many shading calculations, so I suggest Poss 2.
-Al.
If you want to produce a gradient fade-out, I would suggest the following optimization: rather than doing RGB->HSB->RGB for each individual color, you should only calculate the target color. Once you know the target RGB, you can simply calculate the intermediate values in RGB space without having to convert back and forth. Whether you calculate a linear transition or use some sort of curve is up to you.
Method 1: Convert RGB to HSL, adjust HSL, convert back to RGB.
Method 2: Lerp the RGB colour values - http://en.wikipedia.org/wiki/Lerp_(computing)
See my answer to this similar question for a C# implementation of method 2.
Pretend that you alpha blended to white:
oneMinus = 1.0 - amount
r = amount + oneMinus * r
g = amount + oneMinus * g
b = amount + oneMinus * b
where amount is from 0 to 1, with 0 returning the original color and 1 returning white.
You might want to blend with whatever the background color is if you are lightening to display something disabled:
oneMinus = 1.0 - amount
r = amount * dest_r + oneMinus * r
g = amount * dest_g + oneMinus * g
b = amount * dest_b + oneMinus * b
where (dest_r, dest_g, dest_b) is the color being blended to and amount is from 0 to 1, with zero returning (r, g, b) and 1 returning (dest.r, dest.g, dest.b)
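Both variants collapse into one helper; here's a C# sketch (the method name is mine; blending toward Color.White reproduces the lighten-to-white formula above):
static Color Blend(Color src, Color dest, float amount)
{
    // amount in [0, 1]: 0 returns src, 1 returns dest
    float keep = 1.0f - amount;
    return Color.FromArgb(src.A,
        (int)(amount * dest.R + keep * src.R),
        (int)(amount * dest.G + keep * src.G),
        (int)(amount * dest.B + keep * src.B));
}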
I didn't find this question until after it became a related question to my original question.
However, using insight from these great answers, I pieced together a nice two-liner function for this:
Programmatically Lighten or Darken a hex color (or rgb, and blend colors)
It's a version of method 1, but with oversaturation taken into account. Like Keith said in his answer above, use lerp to seemingly solve the same problem Mark mentioned, but without redistribution. The results of shadeColor2 should be much closer to doing it the right way with HSL, but without the overhead.
A bit late to the party, but if you use JavaScript or Node.js, you can use the tinycolor library and manipulate the color the way you want:
tinycolor("red").lighten().desaturate().toHexString() // "#f53d3d"
I would have tried #1 first, but #2 sounds pretty good. Try it yourself and see if you're satisfied with the results; it sounds like it'll take you maybe 10 minutes to whip up a test.
Technically, I don't think either is correct, but I believe you want a variant of option #2. The problem is that taking RGB 990000 and "lightening" it would really just add to the red channel (Value, Brightness, Lightness) until you got to FF. After that (solid red), it would be taking the saturation down to go all the way to solid white.
The conversions get annoying, especially since you can't go directly to and from RGB and Lab, but I think you really want to separate the chrominance and luminance values, and just modify the luminance to really achieve what you want.
Here's an example of lightening an RGB colour in Python:
def lighten(hex, amount):
    """Lighten an RGB color by an amount (between 0 and 1),
    e.g. lighten('#4290e5', .5) = #C1FFFF
    """
    hex = hex.replace('#', '')
    red = min(255, int(hex[0:2], 16) + 255 * amount)
    green = min(255, int(hex[2:4], 16) + 255 * amount)
    blue = min(255, int(hex[4:6], 16) + 255 * amount)
    # %02X keeps each channel two hex digits wide
    return "#%02X%02X%02X" % (int(red), int(green), int(blue))
This is based on Mark Ransom's answer.
While the clampRGB function tries to maintain the hue, it miscalculates the scaling needed to keep the same luminance. This is because the calculation uses sRGB values directly, and those are not linear.
Here's a Java version that does the same (although with values ranging from 0 to 1) while maintaining luminance as well:
private static Color convertToDesiredLuminance(Color input, double desiredLuminance) {
    if (desiredLuminance > 1.0) {
        return Color.WHITE;
    }
    if (desiredLuminance < 0.0) {
        return Color.BLACK;
    }

    double ratio = desiredLuminance / luminance(input);

    double r = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getRed()) * ratio;
    double g = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getGreen()) * ratio;
    double b = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getBlue()) * ratio;

    if (r > 1.0 || g > 1.0 || b > 1.0) {  // anything outside range?
        double br = Math.min(r, 1.0);  // base values
        double bg = Math.min(g, 1.0);
        double bb = Math.min(b, 1.0);

        double rr = 1.0 - br;  // ratios between RGB components to maintain
        double rg = 1.0 - bg;
        double rb = 1.0 - bb;

        double x = (desiredLuminance - luminance(br, bg, bb)) / luminance(rr, rg, rb);

        r = 0.0001 * Math.round(10000.0 * (br + rr * x));
        g = 0.0001 * Math.round(10000.0 * (bg + rg * x));
        b = 0.0001 * Math.round(10000.0 * (bb + rb * x));
    }

    return Color.color(toGamma(r), toGamma(g), toGamma(b));
}
And supporting functions:
private static double toLinear(double v) {  // inverse is #toGamma
    return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
}

private static double toGamma(double v) {  // inverse is #toLinear
    return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1.0 / 2.4) - 0.055;
}

private static double luminance(Color c) {
    return luminance(toLinear(c.getRed()), toLinear(c.getGreen()), toLinear(c.getBlue()));
}

private static double luminance(double r, double g, double b) {
    return r * 0.2126 + g * 0.7152 + b * 0.0722;
}
