BOUNTY STATUS UPDATE:
I discovered how to map a linear lens from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) SOURCE: original photo
Input: original image with fisheye distortion to fix.
(2) DESTINATION: original photo
Output: corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n", p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. The focal length f can be estimated by calibrating the camera, or by other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
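For concreteness, here is a minimal Python sketch of that correction in the style of the stub above. It is mine, not the answerer's code: f (the focal length in pixels) is assumed known, and the input point is assumed to satisfy R_d <= 2f so the asin is defined.

from math import atan2, sqrt, sin, cos, tan, asin

def correct_fisheye(p, img_size, f):
    # map a fisheye-distorted point back to its rectilinear position;
    # f is the focal length in pixels and must be estimated separately
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    rel_x, rel_y = p[0] - cx, p[1] - cy
    theta = atan2(rel_y, rel_x)
    r_d = sqrt(rel_x * rel_x + rel_y * rel_y)
    r_u = f * tan(2 * asin(r_d / (2 * f)))   # R_u = f*tan(2*asin(R_d/(2*f)))
    return (cx + r_u * cos(theta), cy + r_u * sin(theta))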
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:
#!/usr/bin/python

# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) != 11:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help in reversing it)
I got to this point through trial-and-error: I don't fundamentally grasp why this code is working, explanations and improved accuracy appreciated!
from math import sqrt, atan

def dist(x,y):
    return sqrt(x*x+y*y)

def correct_fisheye(src_size,dest_size,dx,dy,factor):
    """ returns a tuple of source coordinates (sx,sy)
        (note: values can be out of range)"""
    # convert dx,dy to relative coordinates
    rx, ry = dx-(dest_size[0]/2), dy-(dest_size[1]/2)
    # calc theta
    r = dist(rx,ry)/(dist(src_size[0],src_size[1])/factor)
    if 0==r:
        theta = 1.0
    else:
        theta = atan(r)/r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2)+theta*rx, (src_size[1]/2)+theta*ry
    # done
    return (int(round(sx)),int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation):
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
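To make the pair concrete, here is a small Python sanity check of the forward and inverse scalings (a sketch of mine; the function names are assumptions, and the forward map is only valid for Rin/f < sqrt(2)):

from math import sqrt

def forward_scale(s):
    # Rout/f as a function of s = Rin/f (fisheye radius -> rectilinear radius)
    return s * sqrt(1 - s * s / 4) / (1 - s * s / 2)

def inverse_scale(t):
    # Rin/f as a function of t = Rout/f (rectilinear radius -> fisheye radius)
    u = sqrt(t * t + 1)
    return t / sqrt(u * (u + 1) / 2)

# quick numeric check that the two really are inverses
for s in (0.1, 0.5, 1.0):
    assert abs(inverse_scale(forward_scale(s)) - s) < 1e-9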
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import sqrt, tan, asin  # asin is used by the nonlinear variant below

def dist(x,y):
    return sqrt(x*x+y*y)

def fisheye_to_rectilinear(src_size,dest_size,sx,sy,crop_factor,zoom):
    """ returns a tuple of dest coordinates (dx,dy)
        (note: values can be out of range)
        crop_factor is ratio of sphere diameter to diagonal of the source image"""
    # convert sx,sy to relative coordinates
    rx, ry = sx-(src_size[0]/2), sy-(src_size[1]/2)
    r = dist(rx,ry)
    if r == 0:
        # the centre pixel maps straight to the centre
        return (int(dest_size[0]/2), int(dest_size[1]/2))
    # focal distance = radius of the sphere
    pi = 3.1415926535
    f = dist(src_size[0],src_size[1])*crop_factor/pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin ( r / ( 2 * f ) ) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2)+rx/r*nr, (dest_size[1]/2)+ry/r*nr
    # done
    return (int(round(dx)),int(round(dy)))
def fisheye_auto_zoom(src_size,dest_size,crop_factor):
    """ calculate zoom such that left edge of source image matches left edge of dest image """
    # Try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # Calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
I found this PDF file and I have proved that the maths are correct (except for the line vd = xd*fv+v0, which should say vd = yd*fv+v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Simply draw curves in images over features that are supposed to be straight lines, and store the contour points for each of those curves. Now we can solve for the fisheye matrix as a minimization problem: minimize the curvature of those contours, and the parameters that achieve this give us the fisheye matrix. It works.
It can be done manually by adjusting the fisheye matrix using trackbars! Here is a fisheye GUI code using OpenCV for manual calibration.
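As a rough illustration of the minimization idea (my own sketch, not the code referred to above): given contours traced along features that should be straight, you can fit a single radial coefficient k1 by minimizing how far each undistorted contour deviates from its best-fit line. The one-coefficient model and all names here are assumptions, and in practice you would normalize the pixel coordinates first.

import numpy as np
from scipy.optimize import minimize

def undistort(points, k1, centre):
    # single-coefficient radial model: p_u = c + (p_d - c) * (1 + k1 * r^2)
    rel = points - centre
    r2 = (rel ** 2).sum(axis=1, keepdims=True)
    return centre + rel * (1 + k1 * r2)

def straightness(points):
    # sum of squared distances from the points to their best-fit line:
    # the smallest singular value of the centred cloud measures the
    # spread off the principal axis
    centred = points - points.mean(axis=0)
    return np.linalg.svd(centred, compute_uv=False)[-1] ** 2

def fit_k1(curves, centre):
    # curves: list of (N,2) arrays of contour points that should be straight
    cost = lambda k: sum(straightness(undistort(c, k[0], centre)) for c in curves)
    return minimize(cost, x0=[0.0], method='Nelder-Mead').x[0]

With more coefficients in the model, the same cost function generalizes to a full distortion vector.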
How to find all the colors between two colors?
At the beginning we have two colors in RGB, and a number of intermediate colors to generate between them. The method must return an array with the required colors. I badly need help with an algorithm.
Suppose we have 2 colors, (R1,G1,B1) and (R2,G2,B2), and N intermediate colors:

for i from 1 to N:
    Ri = R1 + (R2-R1) * i / N
    Gi = G1 + (G2-G1) * i / N
    Bi = B1 + (B2-B1) * i / N
    AddToArray(Ri,Gi,Bi)
Is that what you are looking for?
PS: I would recommend using the HSL color space instead of RGB if you want a more natural color gradient; a sketch of that follows.
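For example, a quick sketch of that HSL variant in Python using the standard colorsys module (which calls the space HLS; hue is interpolated naively here, so see the wrap-around discussion below):

import colorsys

def gradient_hsl(c1, c2, n):
    # n intermediate colors between two RGB tuples (0-255), stepping in HLS
    h1, l1, s1 = colorsys.rgb_to_hls(*(v / 255 for v in c1))
    h2, l2, s2 = colorsys.rgb_to_hls(*(v / 255 for v in c2))
    colors = []
    for i in range(1, n + 1):
        t = i / n
        h, l, s = h1 + (h2 - h1) * t, l1 + (l2 - l1) * t, s1 + (s2 - s1) * t
        colors.append(tuple(round(v * 255) for v in colorsys.hls_to_rgb(h, l, s)))
    return colors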
Let your current cR, cG and cB values be 0%, and let the R, G, and B values be 100%; then iterate i = 1 to 100, with each iteration adding cRGB + i * (RGB - cRGB). You don't have to use 100 intermediate colors; you can use N of them.
function(currentColor, desiredColor, N) {
    var colors = [],
        cR = currentColor.R,
        cG = currentColor.G,
        cB = currentColor.B,
        dR = desiredColor.R - cR,
        dG = desiredColor.G - cG,
        dB = desiredColor.B - cB;
    for (var i = 1; i <= N; i++) {
        colors.push(new Color(cR + i * dR / N, cG + i * dG / N, cB + i * dB / N));
    }
    return colors;
}
However, that won't give you very good intermediate colors. The first thing you should do is convert your colors into HSV or similar colorspace where intensity is separate from hue and saturation. That will give you much better intermediate colors. http://en.wikipedia.org/wiki/HSL_and_HSV
To do that, first convert your colors to HSV and run the same algorithm as above, but with H, S and V instead of RGB. Keep in mind that S and V have a min of 0 and a max of 1, while H is represented in degrees between 0 and 360. You might have to do something extra with H if you want to go from the current color to the destination color as quickly as possible: e.g. if cH = 10 and dH = 50, going from 10 -> 50 is shortest, but if cH = 10 and dH = 350, going from 10 -> -10 (the same as 350 degrees) is shorter.
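That shortest-arc hue handling can be done with modular arithmetic; a small Python sketch (assuming hue in degrees):

def lerp_hue(h1, h2, t):
    # interpolate hue along the shorter arc of the color circle
    diff = (h2 - h1 + 180) % 360 - 180   # signed difference in [-180, 180)
    return (h1 + diff * t) % 360

# e.g. lerp_hue(10, 350, 1.0) returns 350 by passing through 0, not 180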
I'm looking for some kind of formula or algorithm to determine the brightness of a color given the RGB values. I know it can't be as simple as adding the RGB values together and having higher sums be brighter, but I'm kind of at a loss as to where to start.
The method could vary depending on your needs. Here are 3 ways to calculate Luminance:
Luminance (standard for certain colour spaces): (0.2126*R + 0.7152*G + 0.0722*B) source
Luminance (perceived option 1): (0.299*R + 0.587*G + 0.114*B) source
Luminance (perceived option 2, slower to calculate): sqrt( 0.241*R^2 + 0.691*G^2 + 0.068*B^2 ), updated to sqrt( 0.299*R^2 + 0.587*G^2 + 0.114*B^2 ) (thanks to @MatthewHerbst) source
[Edit: added examples using named css colors sorted with each method.]
I think what you are looking for is the RGB -> Luma conversion formula.
Photometric/digital ITU BT.709:
Y = 0.2126 R + 0.7152 G + 0.0722 B
Digital ITU BT.601 (gives more weight to the R and B components):
Y = 0.299 R + 0.587 G + 0.114 B
If you are willing to trade accuracy for performance, there are two approximation formulas for this one:
Y = 0.33 R + 0.5 G + 0.16 B
Y = 0.375 R + 0.5 G + 0.125 B
These can be calculated quickly as
Y = (R+R+B+G+G+G)/6
Y = (R+R+R+B+G+G+G+G)>>3
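In Python, for instance, the two integer approximations come out as:

def luma_fast(r, g, b):
    # (2R + 3G + B) / 6 ~= 0.33 R + 0.5 G + 0.16 B
    return (r + r + b + g + g + g) // 6

def luma_fast2(r, g, b):
    # (3R + 4G + B) >> 3 ~= 0.375 R + 0.5 G + 0.125 B
    return (r + r + r + b + g + g + g + g) >> 3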
The "Accepted" Answer is Incorrect and Incomplete
The only answers that are accurate are the @jive-dadson and @EddingtonsMonkey answers, and in support @nils-pipenbrinck. The other answers (including the accepted) are linking to or citing sources that are either wrong, irrelevant, obsolete, or broken.
Briefly:
sRGB must be LINEARIZED before applying the coefficients.
Luminance (L or Y) is linear as is light.
Perceived lightness (L*) is nonlinear as is human perception.
HSV and HSL are not even remotely accurate in terms of perception.
The IEC standard for sRGB specifies a threshold of 0.04045; it is NOT 0.03928 (that was from an obsolete early draft).
To be useful (i.e. relative to perception), Euclidean distances require a perceptually uniform Cartesian vector space such as CIELAB. sRGB is not one.
What follows is a correct and complete answer:
Because this thread appears highly in search engines, I am adding this answer to clarify the various misconceptions on the subject.
Luminance is a linear measure of light, spectrally weighted for normal vision but not adjusted for the non-linear perception of lightness. It can be a relative measure, Y as in CIEXYZ, or as L, an absolute measure in cd/m2 (not to be confused with L*).
Perceived lightness is used by some vision models such as CIELAB; here L* (Lstar) is a value of perceptual lightness, and is non-linear to approximate the human vision non-linear response curve. (That is, linear to perception but therefore non-linear to light.)
Brightness is a perceptual attribute; it does not have a "physical" measure. However, some color appearance models do have a value, usually denoted as "Q", for perceived brightness, which is different from perceived lightness.
Luma (Y´ prime) is a gamma encoded, weighted signal used in some video encodings (Y´I´Q´). It is not to be confused with linear luminance.
Gamma or transfer curve (TRC) is a curve that is often similar to the perceptual curve, and is commonly applied to image data for storage or broadcast to reduce perceived noise and/or improve data utilization (and related reasons).
To determine perceived lightness, first convert gamma encoded R´G´B´ image values to linear luminance (L or Y ) and then to non-linear perceived lightness (L*)
TO FIND LUMINANCE:
...Because apparently it was lost somewhere...
Step One:
Convert all sRGB 8 bit integer values to decimal 0.0-1.0
vR = sR / 255;
vG = sG / 255;
vB = sB / 255;
Step Two:
Convert gamma-encoded RGB to a linear value. sRGB (the computer standard), for instance, requires a power curve of approximately V^2.2, though the "accurate" transform is the piecewise function implemented below, where V´ is the gamma-encoded R, G, or B channel of sRGB.
Pseudocode:
function sRGBtoLin(colorChannel) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if (colorChannel <= 0.04045) {
        return colorChannel / 12.92;
    } else {
        return pow(((colorChannel + 0.055) / 1.055), 2.4);
    }
}
Step Three:
To find Luminance (Y) apply the standard coefficients for sRGB:
Pseudocode using above functions:
Y = (0.2126 * sRGBtoLin(vR) + 0.7152 * sRGBtoLin(vG) + 0.0722 * sRGBtoLin(vB))
TO FIND PERCEIVED LIGHTNESS:
Step Four:
Take luminance Y from above, and transform to L*
Pseudocode:
function YtoLstar(Y) {
    // Send this function a luminance value between 0.0 and 1.0,
    // and it returns L* which is "perceptual lightness"
    if (Y <= (216/24389)) {       // The CIE standard states 0.008856, but 216/24389 is the intent: 0.008856451679036...
        return Y * (24389/27);    // The CIE standard states 903.3, but 24389/27 is the intent: 903.296296296296...
    } else {
        return pow(Y, (1/3)) * 116 - 16;
    }
}
L* is a value from 0 (black) to 100 (white) where 50 is the perceptual "middle grey". L* = 50 is the equivalent of Y = 18.4, or in other words an 18% grey card, representing the middle of a photographic exposure (Ansel Adams zone V).
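Putting the four steps together, here is a compact, runnable Python version of the pseudocode above (the function name is mine):

def srgb_to_lstar(r, g, b):
    # perceptual lightness L* (0-100) from 8-bit sRGB, following steps 1-4
    def lin(v):
        v /= 255.0
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    y = 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)
    if y <= 216 / 24389:
        return y * 24389 / 27
    return y ** (1 / 3) * 116 - 16

# srgb_to_lstar(255, 255, 255) == 100.0, srgb_to_lstar(0, 0, 0) == 0.0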
References:
IEC 61966-2-1:1999 Standard
Wikipedia sRGB
Wikipedia CIELAB
Wikipedia CIEXYZ
Charles Poynton's Gamma FAQ
I have made a comparison of the three algorithms in the accepted answer. I generated colors in a cycle where only about every 400th color was used. Each color is represented by 2x2 pixels; colors are sorted from darkest to lightest (left to right, top to bottom).
1st picture - Luminance (relative)
0.2126 * R + 0.7152 * G + 0.0722 * B
2nd picture - http://www.w3.org/TR/AERT#color-contrast
0.299 * R + 0.587 * G + 0.114 * B
3rd picture - HSP Color Model
sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula (see @Synchro's answer here)
A pattern can sometimes be spotted in the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern in the picture from the 3rd or 4th algorithm.
If I had to choose, I would go with algorithm number 3, since it's much easier to implement and it's about 33% faster than the 4th.
Below is the only CORRECT algorithm for converting sRGB images, as used in browsers etc., to grayscale.
It is necessary to apply an inverse of the gamma function for the color space before calculating the inner product. Then you apply the gamma function to the reduced value. Failure to incorporate the gamma function can result in errors of up to 20%.
For typical computer stuff, the color space is sRGB. The right numbers for sRGB are approx. 0.21, 0.72, 0.07. Gamma for sRGB is a composite function that approximates exponentiation by 1/(2.2). Here is the whole thing in C++.
// sRGB luminance(Y) values
const double rY = 0.212655;
const double gY = 0.715158;
const double bY = 0.072187;

// Inverse of sRGB "gamma" function. (approx 2.2)
double inv_gam_sRGB(int ic) {
    double c = ic / 255.0;
    if (c <= 0.04045)
        return c / 12.92;
    else
        return pow(((c + 0.055) / 1.055), 2.4);
}

// sRGB "gamma" function (approx 2.2)
int gam_sRGB(double v) {
    if (v <= 0.0031308)
        v *= 12.92;
    else
        v = 1.055 * pow(v, 1.0 / 2.4) - 0.055;
    return int(v * 255 + 0.5); // This is correct in C++. Other languages may not require +0.5
}

// GRAY VALUE ("brightness")
int gray(int r, int g, int b) {
    return gam_sRGB(
        rY * inv_gam_sRGB(r) +
        gY * inv_gam_sRGB(g) +
        bY * inv_gam_sRGB(b)
    );
}
Rather than getting lost amongst the random selection of formulae mentioned here, I suggest you go for the formula recommended by W3C standards.
Here's a straightforward but exact PHP implementation of the WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formulae. It produces values that are appropriate for evaluating the ratios required for WCAG compliance, as on this page, and as such is suitable and appropriate for any web app. This is trivial to port to other languages.
/**
 * Calculate relative luminance in sRGB colour space for use in WCAG 2.0 compliance
 * @link http://www.w3.org/TR/WCAG20/#relativeluminancedef
 * @param string $col A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <marcus@synchromedia.co.uk>
 */
function relativeluminance($col) {
    //Remove any leading #
    $col = trim($col, '#');
    //Convert 3-digit to 6-digit
    if (strlen($col) == 3) {
        $col = $col[0] . $col[0] . $col[1] . $col[1] . $col[2] . $col[2];
    }
    //Convert hex to 0-1 scale
    $components = array(
        'r' => hexdec(substr($col, 0, 2)) / 255,
        'g' => hexdec(substr($col, 2, 2)) / 255,
        'b' => hexdec(substr($col, 4, 2)) / 255
    );
    //Correct for sRGB
    foreach ($components as $c => $v) {
        if ($v <= 0.04045) {
            $components[$c] = $v / 12.92;
        } else {
            $components[$c] = pow((($v + 0.055) / 1.055), 2.4);
        }
    }
    //Calculate relative luminance using ITU-R BT. 709 coefficients
    return ($components['r'] * 0.2126) + ($components['g'] * 0.7152) + ($components['b'] * 0.0722);
}

/**
 * Calculate contrast ratio according to WCAG 2.0 formula
 * Will return a value between 1 (no contrast) and 21 (max contrast)
 * @link http://www.w3.org/TR/WCAG20/#contrast-ratiodef
 * @param string $c1 A 3 or 6-digit hex colour string
 * @param string $c2 A 3 or 6-digit hex colour string
 * @return float
 * @author Marcus Bointon <marcus@synchromedia.co.uk>
 */
function contrastratio($c1, $c2) {
    $y1 = relativeluminance($c1);
    $y2 = relativeluminance($c2);
    //Arrange so $y1 is lightest
    if ($y1 < $y2) {
        $y3 = $y1;
        $y1 = $y2;
        $y2 = $y3;
    }
    return ($y1 + 0.05) / ($y2 + 0.05);
}
To add to what all the others said:
All these equations work kinda well in practice, but if you need to be very precise you have to first convert the color to linear color space (apply the inverse image gamma), take the weighted average of the primary colors and, if you want to display the color, take the luminance back into the monitor gamma.
The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.
Interestingly, this formulation for RGB=>HSV just uses v=MAX3(r,g,b). In other words, you can use the maximum of (r,g,b) as the V in HSV.
I checked and on page 575 of Hearn & Baker this is how they compute "Value" as well.
I was solving a similar task today in JavaScript.
I've settled on this getPerceivedLightness(rgb) function for a HEX RGB color.
It deals with the Helmholtz-Kohlrausch effect via the Fairchild and Pirrotta formula for luminance correction.
/**
 * Converts RGB color to CIE 1931 XYZ color space.
 * https://www.image-engineering.de/library/technotes/958-how-to-convert-between-srgb-and-ciexyz
 * @param {string} hex
 * @return {number[]}
 */
export function rgbToXyz(hex) {
    const [r, g, b] = hexToRgb(hex).map(_ => _ / 255).map(sRGBtoLinearRGB)
    const X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    const Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    const Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    // For some reason, X, Y and Z are multiplied by 100.
    return [X, Y, Z].map(_ => _ * 100)
}

/**
 * Undoes gamma-correction from an RGB-encoded color.
 * https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation
 * https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color
 * @param {number}
 * @return {number}
 */
function sRGBtoLinearRGB(color) {
    // Send this function a decimal sRGB gamma encoded color value
    // between 0.0 and 1.0, and it returns a linearized value.
    if (color <= 0.04045) {
        return color / 12.92
    } else {
        return Math.pow((color + 0.055) / 1.055, 2.4)
    }
}

/**
 * Converts hex color to RGB.
 * https://stackoverflow.com/questions/5623838/rgb-to-hex-and-hex-to-rgb
 * @param {string} hex
 * @return {number[]} [rgb]
 */
function hexToRgb(hex) {
    const match = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex)
    if (match) {
        match.shift()
        return match.map(_ => parseInt(_, 16))
    }
}

/**
 * Converts CIE 1931 XYZ colors to CIE L*a*b*.
 * The conversion formula comes from <http://www.easyrgb.com/en/math.php>.
 * https://github.com/cangoektas/xyz-to-lab/blob/master/src/index.js
 * @param {number[]} color The CIE 1931 XYZ color to convert which refers to
 *   the D65/2° standard illuminant.
 * @returns {number[]} The color in the CIE L*a*b* color space.
 */
// X, Y, Z of a "D65" light source.
// "D65" is a standard 6500K Daylight light source.
// https://en.wikipedia.org/wiki/Illuminant_D65
const D65 = [95.047, 100, 108.883]
export function xyzToLab([x, y, z]) {
    [x, y, z] = [x, y, z].map((v, i) => {
        v = v / D65[i]
        return v > 0.008856 ? Math.pow(v, 1 / 3) : v * 7.787 + 16 / 116
    })
    const l = 116 * y - 16
    const a = 500 * (x - y)
    const b = 200 * (y - z)
    return [l, a, b]
}

/**
 * Converts Lab color space to Luminance-Chroma-Hue color space.
 * http://www.brucelindbloom.com/index.html?Eqn_Lab_to_LCH.html
 * @param {number[]}
 * @return {number[]}
 */
export function labToLch([l, a, b]) {
    const c = Math.sqrt(a * a + b * b)
    const h = abToHue(a, b)
    return [l, c, h]
}

/**
 * Converts a and b of Lab color space to Hue of LCH color space.
 * https://stackoverflow.com/questions/53733379/conversion-of-cielab-to-cielchab-not-yielding-correct-result
 * @param {number} a
 * @param {number} b
 * @return {number}
 */
function abToHue(a, b) {
    if (a >= 0 && b === 0) {
        return 0
    }
    if (a < 0 && b === 0) {
        return 180
    }
    if (a === 0 && b > 0) {
        return 90
    }
    if (a === 0 && b < 0) {
        return 270
    }
    let xBias
    if (a > 0 && b > 0) {
        xBias = 0
    } else if (a < 0) {
        xBias = 180
    } else if (a > 0 && b < 0) {
        xBias = 360
    }
    return radiansToDegrees(Math.atan(b / a)) + xBias
}

function radiansToDegrees(radians) {
    return radians * (180 / Math.PI)
}

function degreesToRadians(degrees) {
    return degrees * Math.PI / 180
}

/**
 * Saturated colors appear brighter to the human eye.
 * That's called the Helmholtz-Kohlrausch effect.
 * Fairchild and Pirrotta came up with a formula to
 * calculate a correction for that effect.
 * "Color Quality of Semiconductor and Conventional Light Sources":
 * https://books.google.ru/books?id=ptDJDQAAQBAJ&pg=PA45&lpg=PA45&dq=fairchild+pirrotta+correction&source=bl&ots=7gXR2MGJs7&sig=ACfU3U3uIHo0ZUdZB_Cz9F9NldKzBix0oQ&hl=ru&sa=X&ved=2ahUKEwi47LGivOvmAhUHEpoKHU_ICkIQ6AEwAXoECAkQAQ#v=onepage&q=fairchild%20pirrotta%20correction&f=false
 * @return {number}
 */
function getLightnessUsingFairchildPirrottaCorrection([l, c, h]) {
    const l_ = 2.5 - 0.025 * l
    const g = 0.116 * Math.abs(Math.sin(degreesToRadians((h - 90) / 2))) + 0.085
    return l + l_ * g * c
}

export function getPerceivedLightness(hex) {
    return getLightnessUsingFairchildPirrottaCorrection(labToLch(xyzToLab(rgbToXyz(hex))))
}
Consider this an addendum to Myndex's excellent answer. As he (and others) explain, the algorithms for calculating the relative luminance (and the perceptual lightness) of an RGB colour are designed to work with linear RGB values. You can't just apply them to raw sRGB values and hope to get the same results.
Well that all sounds great in theory, but I really needed to see the evidence for myself, so, inspired by Petr Hurtak's colour gradients, I went ahead and made my own. They illustrate the two most common algorithms (ITU-R Recommendation BT.601 and BT.709), and clearly demonstrate why you should do your calculations with linear values (not gamma-corrected ones).
Firstly, here are the results from the older ITU BT.601 algorithm. The one on the left uses raw sRGB values. The one on the right uses linear values.
ITU-R BT.601 colour luminance gradients
0.299 R + 0.587 G + 0.114 B
At this resolution, the left one actually looks surprisingly good! But if you look closely, you can see a few issues. At a higher resolution, unwanted artefacts are more obvious:
The linear one doesn't suffer from these, but there's quite a lot of noise there. Let's compare it to ITU-R Recommendation BT.709…
ITU-R BT.709 colour luminance gradients
0.2126 R + 0.7152 G + 0.0722 B
Oh boy. Clearly not intended to be used with raw sRGB values! And yet, that's exactly what most people do!
At high-res, you can really see how effective this algorithm is when using linear values. It doesn't have nearly as much noise as the earlier one. While none of these algorithms are perfect, this one is about as good as it gets.
Here's a bit of C code that should properly calculate perceived luminance.
// reverses the rgb gamma
#define inverseGamma(t) (((t) <= 0.0404482362771076) ? ((t)/12.92) : pow(((t) + 0.055)/1.055, 2.4))

// CIE L*a*b* f function (used to convert XYZ to L*a*b*)
// http://en.wikipedia.org/wiki/Lab_color_space
#define LABF(t) ((t >= 8.85645167903563082e-3) ? powf(t,0.333333333333333) : (841.0/108.0)*(t) + (4.0/29.0))

float rgbToCIEL(PIXEL p)
{
    float y;
    float r = p.r/255.0;
    float g = p.g/255.0;
    float b = p.b/255.0;

    r = inverseGamma(r);
    g = inverseGamma(g);
    b = inverseGamma(b);

    //Observer = 2°, Illuminant = D65
    y = 0.2125862307855955516*r + 0.7151703037034108499*g + 0.07220049864333622685*b;

    // At this point we've done RGBtoXYZ; now do XYZ to Lab
    // y /= WHITEPOINT_Y; The white point for y in D65 is 1.0
    y = LABF(y);

    /* This is the "normal" conversion, which produces values scaled to 100:
       Lab.L = 116.0*y - 16.0;
    */
    return (1.16*y - 0.16); // return values for 0.0 <= L <= 1.0
}
RGB Luminance value = 0.3 R + 0.59 G + 0.11 B
http://www.scantips.com/lumin.html
If you're looking for how close to white the color is, you can use the Euclidean distance from (255, 255, 255).
I think the RGB color space is perceptually non-uniform with respect to the L2 Euclidean distance.
Uniform spaces include CIE LAB and LUV.
The inverse-gamma formula by Jive Dadson needs to have the half-adjust removed when implemented in JavaScript, i.e. the return from the function gam_sRGB needs to be return int(v*255); not return int(v*255+.5); Half-adjust rounds up, and this can cause a value one too high on an R=G=B (i.e. grey) colour triad. Greyscale conversion on an R=G=B triad should produce a value equal to R; that's one proof that the formula is valid.
See Nine Shades of Greyscale for the formula in action (without the half-adjust).
I wonder how those rgb coefficients were determined. I did an experiment myself and I ended up with the following:
Y = 0.267 R + 0.642 G + 0.091 B
Close, but obviously different from the long-established ITU coefficients. I wonder if those coefficients could be different for each and every observer: we may all have a different number of cones and rods on the retina in our eyes, and especially the ratio between the different types of cones may differ.
For reference:
ITU BT.709:
Y = 0.2126 R + 0.7152 G + 0.0722 B
ITU BT.601:
Y = 0.299 R + 0.587 G + 0.114 B
I did the test by quickly moving a small gray bar on a bright red, bright green and bright blue background, and adjusting the gray until it blended in just as much as possible. I also repeated that test with other shades. I repeated the test on different displays, even one with a fixed gamma factor of 3.0, but it all looks the same to me. Moreover, the ITU coefficients literally are wrong for my eyes.
And yes, I presumably have normal color vision.
As mentioned by @Nils Pipenbrinck:
All these equations work kinda well in practice, but if you need to be very precise you have to [do some extra gamma stuff]. The luminance difference between ignoring gamma and doing proper gamma is up to 20% in the dark grays.
Here's a fully self-contained JavaScript function that does the "extra" stuff to get that extra accuracy. It's based on Jive Dadson's C++ answer to this same question.
// Returns greyscale "brightness" (0-1) of the given 0-255 RGB values
// Based on this C++ implementation: https://stackoverflow.com/a/13558570/11950764
function rgbBrightness(r, g, b) {
    let v = 0;
    v += 0.212655 * ((r/255) <= 0.04045 ? (r/255)/12.92 : Math.pow(((r/255)+0.055)/1.055, 2.4));
    v += 0.715158 * ((g/255) <= 0.04045 ? (g/255)/12.92 : Math.pow(((g/255)+0.055)/1.055, 2.4));
    v += 0.072187 * ((b/255) <= 0.04045 ? (b/255)/12.92 : Math.pow(((b/255)+0.055)/1.055, 2.4));
    return v <= 0.0031308 ? v*12.92 : 1.055 * Math.pow(v, 1.0/2.4) - 0.055;
}
See Myndex's answer for a more accurate calculation.
The HSV colorspace should do the trick; see the wikipedia article. Depending on the language you're working in, you may get a library conversion.
H is hue which is a numerical value for the color (i.e. red, green...)
S is the saturation of the color, i.e. how 'intense' it is
V is the 'brightness' of the color.
The 'V' of HSV is probably what you're looking for. MATLAB has an rgb2hsv function and the previously cited wikipedia article is full of pseudocode. If an RGB2HSV conversion is not feasible, a less accurate model would be the grayscale version of the image.
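To illustrate with Python's standard colorsys module (channel values on a 0.0-1.0 scale):

import colorsys

# V of HSV is simply the maximum of the three channels
r, g, b = 0.8, 0.4, 0.2
h, s, v = colorsys.rgb_to_hsv(r, g, b)
assert v == max(r, g, b)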
To determine the brightness of a color with R, I convert the RGB color to an HSV color.
In my script, I use the HEX system code beforehand for other reasons, but you can also start with RGB system code with rgb2hsv {grDevices}. The documentation is here.
Here is this part of my code:
sample <- c("#010101", "#303030", "#A6A4A4", "#020202", "#010100")
hsvc <-rgb2hsv(col2rgb(sample)) # convert HEX to HSV
value <- as.data.frame(hsvc) # create data.frame
value <- value[3,] # extract the information of brightness
order(value) # order the colors by brightness
My SIMPLE conclusion based on all these answers: for most practical use cases, you only need:
brightness = 0.2*r + 0.7*g + 0.1*b
When the r,g,b values are between 0 and 255, the brightness scale is also between 0 (=black) and 255 (=white).
It can be fine-tuned, but there's usually no need.
For clarity, the formulas that use a square root need to be
sqrt(coefficient * (colour_value^2))
not
sqrt((coefficient * colour_value)^2)
The proof of this lies in the conversion of a R=G=B triad to greyscale R. That will only be true if you square the colour value, not the colour value times coefficient. See Nine Shades of Greyscale
Please define brightness. If you're looking for how close to white the color is, you can use the Euclidean distance from (255, 255, 255).
I'm currently writing a program to generate really enormous (65536x65536 pixels and above) Mandelbrot images, and I'd like to devise a spectrum and coloring scheme that does them justice. The wikipedia featured mandelbrot image seems like an excellent example, especially how the palette remains varied at all zoom levels of the sequence. I'm not sure if it's rotating the palette or doing some other trick to achieve this, though.
I'm familiar with the smooth coloring algorithm for the mandelbrot set, so I can avoid banding, but I still need a way to assign colors to output values from this algorithm.
The images I'm generating are pyramidal (eg, a series of images, each of which has half the dimensions of the previous one), so I can use a rotating palette of some sort, as long as the change in the palette between subsequent zoom levels isn't too obvious.
This is the smooth color algorithm:
Let's say you start with the complex number z0 and iterate n times until it escapes. Let the end point be zn.
A smooth value would be
nsmooth := n + 1 - Math.log(Math.log(zn.abs()))/Math.log(2)
This only works for the Mandelbrot set; if you want to compute a smooth function for Julia sets, then use
Complex z = new Complex(x, y);
double smoothcolor = Math.exp(-z.abs());
for (i = 0; i < max_iter && z.abs() < 30; i++) {
    z = f(z);
    smoothcolor += Math.exp(-z.abs());
}
Then smoothcolor is in the interval (0,max_iter).
Divide smoothcolor by max_iter to get a value between 0 and 1.
To get a smooth color from the value, this can be called, for example (in Java):
Color.HSBtoRGB(0.95f + 10 * smoothcolor ,0.6f,1.0f);
since the first value in HSB color parameters is used to define the color from the color circle.
Use the smooth coloring algorithm to calculate all of the values within the viewport, then map your palette from the lowest to highest value. Thus, as you zoom in and the higher values are no longer visible, the palette will scale down as well. With the same constants for n and B you will end up with a range of 0.0 to 1.0 for a fully zoomed out set, but at deeper zooms the dynamic range will shrink, to say 0.0 to 0.1 at 200% zoom, 0.0 to 0.0001 at 20000% zoom, etc.
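A sketch of that per-viewport rescaling in Python (assuming values holds the smooth iteration counts of the visible pixels and palette is a list of colors; both names are mine):

def map_to_palette(values, palette):
    # scale the current viewport's smooth values onto the full palette
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0   # avoid division by zero on a flat region
    return [palette[int((v - lo) / span * (len(palette) - 1))] for v in values]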
Here is a typical inner loop for a naive Mandelbrot generator. To get a smooth colour you want to pass in the real and complex "lengths" and the iteration you bailed out at. I've included the Mandelbrot code so you can see which vars to use to calculate the colour.
for (ix = 0; ix < panelMain.Width; ix++)
{
    cx = cxMin + (double)ix * pixelWidth;
    // init this go
    zx = 0.0;
    zy = 0.0;
    zx2 = 0.0;
    zy2 = 0.0;
    for (i = 0; i < iterationMax && ((zx2 + zy2) < er2); i++)
    {
        zy = zx * zy * 2.0 + cy;
        zx = zx2 - zy2 + cx;
        zx2 = zx * zx;
        zy2 = zy * zy;
    }
    if (i == iterationMax)
    {
        // interior, part of set, black
        // set colour to black
        g.FillRectangle(sbBlack, ix, iy, 1, 1);
    }
    else
    {
        // outside, set colour proportional to time/distance it took to converge
        // set colour not black
        SolidBrush sbNeato = new SolidBrush(MapColor(i, zx2, zy2));
        g.FillRectangle(sbNeato, ix, iy, 1, 1);
    }
}
and MapColor below: (see this link to get the ColorFromHSV function)
private Color MapColor(int i, double r, double c)
{
    double di = (double)i;
    double zn;
    double hue;
    zn = Math.Sqrt(r + c);
    hue = di + 1.0 - Math.Log(Math.Log(Math.Abs(zn))) / Math.Log(2.0); // 2 is escape radius
    hue = 0.95 + 20.0 * hue; // adjust to make it prettier
    // the hsv function expects values from 0 to 360
    while (hue > 360.0)
        hue -= 360.0;
    while (hue < 0.0)
        hue += 360.0;
    return ColorFromHSV(hue, 0.8, 1.0);
}
MapColour is "smoothing" the bailout values from 0 to 1 which then can be used to map a colour without horrible banding. Playing with MapColour and/or the hsv function lets you alter what colours are used.
Seems simple to do by trial and error. Assume you can define HSV1 and HSV2 (hue, saturation, value) of the endpoint colors you wish to use (black and white; blue and yellow; dark red and light green; etc.), and assume you have an algorithm to assign a value P between 0.0 and 1.0 to each of your pixels. Then that pixel's color becomes
(H2 - H1) * P + H1 = HP
(S2 - S1) * P + S1 = SP
(V2 - V1) * P + V1 = VP
With that done, just observe the results and see how you like them. If the algorithm to assign P is continuous, then the gradient should be smooth as well.
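As a one-line Python sketch of those three equations:

def hsv_lerp(hsv1, hsv2, p):
    # linear interpolation between two HSV endpoints, 0.0 <= p <= 1.0
    return tuple(a + (b - a) * p for a, b in zip(hsv1, hsv2))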
My eventual solution was to create a nice looking (and fairly large) palette and store it as a constant array in the source, then interpolate between indexes in it using the smooth coloring algorithm. The palette wraps (and is designed to be continuous), but this doesn't appear to matter much.
What's going on with the color mapping in that image is that it's using a 'log transfer function' on the index (according to the documentation). How exactly it's doing it I still haven't figured out. The program that produced it uses a palette of 400 colors, so the index ranges over [0,399), wrapping around if needed. I've managed to get pretty close to matching its behavior. I use an index range of [0,1) and map it like so:
double value = Math.log(0.021 * (iteration + delta + 60)) + 0.72;
value = value - Math.floor(value);
It's kind of odd that I have to use these special constants in there to get my results to match, since I doubt they do any of that. But whatever works in the end, right?
Here you can find a version in JavaScript.
Usage:
var rgbcol = MapColor(Iteration, Zy2, Zx2);
point(ctx, iX, iY, rgbcol[0], rgbcol[1], rgbcol[2]);
The functions:
/*
* The Mandelbrot Set, in HTML5 canvas and javascript.
* https://github.com/cslarsen/mandelbrot-js
*
* Copyright (C) 2012 Christian Stigen Larsen
*/
/*
* Convert hue-saturation-value/luminosity to RGB.
*
* Input ranges:
* H = [0, 360] (integer degrees)
* S = [0.0, 1.0] (float)
* V = [0.0, 1.0] (float)
*/
function hsv_to_rgb(h, s, v)
{
    if (v > 1.0) v = 1.0;
    var hp = h / 60.0;
    var c = v * s;
    var x = c * (1 - Math.abs((hp % 2) - 1));
    var rgb = [0, 0, 0];

    if (0 <= hp && hp < 1) rgb = [c, x, 0];
    if (1 <= hp && hp < 2) rgb = [x, c, 0];
    if (2 <= hp && hp < 3) rgb = [0, c, x];
    if (3 <= hp && hp < 4) rgb = [0, x, c];
    if (4 <= hp && hp < 5) rgb = [x, 0, c];
    if (5 <= hp && hp < 6) rgb = [c, 0, x];

    var m = v - c;
    rgb[0] += m;
    rgb[1] += m;
    rgb[2] += m;

    rgb[0] *= 255;
    rgb[1] *= 255;
    rgb[2] *= 255;

    rgb[0] = parseInt(rgb[0]);
    rgb[1] = parseInt(rgb[1]);
    rgb[2] = parseInt(rgb[2]);

    return rgb;
}
// http://stackoverflow.com/questions/369438/smooth-spectrum-for-mandelbrot-set-rendering
// alex russel : http://stackoverflow.com/users/2146829/alex-russell
function MapColor(i, r, c)
{
    var di = i;
    var zn;
    var hue;
    zn = Math.sqrt(r + c);
    hue = di + 1.0 - Math.log(Math.log(Math.abs(zn))) / Math.log(2.0); // 2 is escape radius
    hue = 0.95 + 20.0 * hue; // adjust to make it prettier
    // the hsv function expects values from 0 to 360
    while (hue > 360.0)
        hue -= 360.0;
    while (hue < 0.0)
        hue += 360.0;
    return hsv_to_rgb(hue, 0.8, 1.0);
}
Motivation
I'd like to find a way to take an arbitrary color and lighten it a few shades, so that I can programmatically create a nice gradient from the one color to a lighter version. The gradient will be used as a background in a UI.
Possibility 1
Obviously I can just split out the RGB values and increase them individually by a certain amount. Is this actually what I want?
Possibility 2
My second thought was to convert the RGB to HSV/HSB/HSL (Hue, Saturation, Value/Brightness/Lightness), increase the brightness a bit, decrease the saturation a bit, and then convert it back to RGB. Will this have the desired effect in general?
As Wedge said, you want to multiply to make things brighter, but that only works until one of the colors becomes saturated (i.e. hits 255 or greater). At that point, you can just clamp the values to 255, but you'll be subtly changing the hue as you get lighter. To keep the hue, you want to maintain the ratio of (middle-lowest)/(highest-lowest).
Here are two functions in Python. The first implements the naive approach which just clamps the RGB values to 255 if they go over. The second redistributes the excess values to keep the hue intact.
def clamp_rgb(r, g, b):
    return min(255, int(r)), min(255, int(g)), min(255, int(b))

def redistribute_rgb(r, g, b):
    threshold = 255.999
    m = max(r, g, b)
    if m <= threshold:
        return int(r), int(g), int(b)
    total = r + g + b
    if total >= 3 * threshold:
        return int(threshold), int(threshold), int(threshold)
    x = (3 * threshold - total) / (3 * m - total)
    gray = threshold - x * m
    return int(gray + x * r), int(gray + x * g), int(gray + x * b)
I created a gradient starting with the RGB value (224,128,0) and multiplying it by 1.0, 1.1, 1.2, etc. up to 2.0. The upper half is the result using clamp_rgb and the bottom half is the result with redistribute_rgb. I think it's easy to see that redistributing the overflows gives a much better result, without having to leave the RGB color space.
For comparison, here's the same gradient in the HLS and HSV color spaces, as implemented by Python's colorsys module. Only the L component was modified, and clamping was performed on the resulting RGB values. The results are similar, but require color space conversions for every pixel.
I would go for the second option. Generally speaking, the RGB space is not really good for doing color manipulation (creating a transition from one color to another, lightening/darkening a color, etc.). Below are two sites I've found with a quick search to convert from/to RGB to/from HSL:
from the "Fundamentals of Computer Graphics"
some sourcecode in C# - should be easy to adapt to other programming languages.
In C#:
public static Color Lighten(Color inColor, double inAmount)
{
    return Color.FromArgb(
        inColor.A,
        (int)Math.Min(255, inColor.R + 255 * inAmount),
        (int)Math.Min(255, inColor.G + 255 * inAmount),
        (int)Math.Min(255, inColor.B + 255 * inAmount));
}
I've used this all over the place.
ControlPaint class in System.Windows.Forms namespace has static methods Light and Dark:
public static Color Dark(Color baseColor, float percOfDarkDark);
These methods use private implementation of HLSColor. I wish this struct was public and in System.Drawing.
Alternatively, you can use GetHue, GetSaturation, GetBrightness on Color struct to get HSB components. Unfortunately, I didn't find the reverse conversion.
Convert it to RGB and linearly interpolate between the original color and the target color (often white). So, if you want 16 shades between two colors, you do:
for (i = 0; i < 16; i++)
{
    colors[i].R = start.R + (i * (end.R - start.R)) / 15;
    colors[i].G = start.G + (i * (end.G - start.G)) / 15;
    colors[i].B = start.B + (i * (end.B - start.B)) / 15;
}
In order to get a lighter or a darker version of a given color you should modify its brightness. You can do this easily even without converting your color to HSL or HSB color. For example to make a color lighter you can use the following code:
float correctionFactor = 0.5f;
float red = (255 - color.R) * correctionFactor + color.R;
float green = (255 - color.G) * correctionFactor + color.G;
float blue = (255 - color.B) * correctionFactor + color.B;
Color lighterColor = Color.FromArgb(color.A, (int)red, (int)green, (int)blue);
If you need more details, read the full story on my blog.
Converting to HS(LVB), increasing the brightness and then converting back to RGB is the only way to reliably lighten the colour without affecting the hue and saturation values (i.e. to only lighten the colour without changing it in any other way).
A very similar question, with useful answers, was asked previously:
How do I determine darker or lighter color variant of a given color?
Short answer: multiply the RGB values by a constant if you just need "good enough", translate to HSV if you require accuracy.
I used Andrew's answer and Mark's answer to make this (as of 1/2013 no range input for ff).
function calcLightness(l, r, g, b) {
    var tmp_r = r;
    var tmp_g = g;
    var tmp_b = b;

    tmp_r = (255 - r) * l + r;
    tmp_g = (255 - g) * l + g;
    tmp_b = (255 - b) * l + b;

    if (tmp_r > 255 || tmp_g > 255 || tmp_b > 255)
        return { r: r, g: g, b: b };
    else
        return { r: parseInt(tmp_r), g: parseInt(tmp_g), b: parseInt(tmp_b) };
}
I've done this both ways -- you get much better results with Possibility 2.
Any simple algorithm you construct for Possibility 1 will probably work well only for a limited range of starting saturations.
You would want to look into Poss 1 if (1) you can restrict the colors and brightnesses used, and (2) you are performing the calculation a lot in a rendering.
Generating the background for a UI won't need very many shading calculations, so I suggest Poss 2.
-Al.
If you want to produce a gradient fade-out, I would suggest the following optimization: rather than doing RGB->HSB->RGB for each individual color, you should only calculate the target color. Once you know the target RGB, you can simply calculate the intermediate values in RGB space without having to convert back and forth. Whether you calculate a linear transition or use some sort of curve is up to you.
Method 1: Convert RGB to HSL, adjust HSL, convert back to RGB.
Method 2: Lerp the RGB colour values - http://en.wikipedia.org/wiki/Lerp_(computing)
See my answer to this similar question for a C# implementation of method 2.
Pretend that you alpha blended to white:
oneMinus = 1.0 - amount
r = amount + oneMinus * r
g = amount + oneMinus * g
b = amount + oneMinus * b
where amount is from 0 to 1, with 0 returning the original color and 1 returning white.
You might want to blend with whatever the background color is if you are lightening to display something disabled:
oneMinus = 1.0 - amount
r = amount * dest_r + oneMinus * r
g = amount * dest_g + oneMinus * g
b = amount * dest_b + oneMinus * b
where (dest_r, dest_g, dest_b) is the color being blended to and amount is from 0 to 1, with zero returning (r, g, b) and 1 returning (dest.r, dest.g, dest.b)
I didn't find this question until after it became a related question to my original question.
However, using insight from these great answers, I pieced together a nice two-liner function for this:
Programmatically Lighten or Darken a hex color (or rgb, and blend colors)
It's a version of method 1, but with over-saturation taken into account. Like Keith said in his answer above, use lerp to seamlessly solve the same problem Mark mentioned, but without redistribution. The results of shadeColor2 should be much closer to doing it the right way with HSL, but without the overhead.
A bit late to the party, but if you use JavaScript or Node.js, you can use the tinycolor library and manipulate the color the way you want:
tinycolor("red").lighten().desaturate().toHexString() // "#f53d3d"
I would have tried number #1 first, but #2 sounds pretty good. Try doing it yourself and see if you're satisfied with the results, it sounds like it'll take you maybe 10 minutes to whip up a test.
Technically, I don't think either is correct, but I believe you want a variant of option #2. The problem is that taking RGB 990000 and "lightening" it would really just add onto the red channel (Value, Brightness, Lightness) until you got to FF. After that (solid red), it would be taking down the saturation to go all the way to solid white.
The conversions get annoying, especially since you can't go directly to and from RGB and Lab, but I think you really want to separate the chrominance and luminance values, and just modify the luminance to really achieve what you want.
Here's an example of lightening an RGB colour in Python:
def lighten(hex, amount):
    """ Lighten an RGB color by an amount (between 0 and 1),
    e.g. lighten('#4290e5', .5) = #C1FFFF
    """
    hex = hex.replace('#', '')
    red = min(255, int(hex[0:2], 16) + 255 * amount)
    green = min(255, int(hex[2:4], 16) + 255 * amount)
    blue = min(255, int(hex[4:6], 16) + 255 * amount)
    return "#%X%X%X" % (int(red), int(green), int(blue))
This is based on Mark Ransom's answer.
While the clampRGB function tries to maintain the hue, it miscalculates the scaling needed to keep the same luminance. This is because the calculation directly uses sRGB values, which are not linear.
Here's a Java version that does the same as clampRGB (although with values ranging from 0 to 1) that maintains luminance as well:
private static Color convertToDesiredLuminance(Color input, double desiredLuminance) {
    if (desiredLuminance > 1.0) {
        return Color.WHITE;
    }
    if (desiredLuminance < 0.0) {
        return Color.BLACK;
    }

    double ratio = desiredLuminance / luminance(input);
    double r = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getRed()) * ratio;
    double g = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getGreen()) * ratio;
    double b = Double.isInfinite(ratio) ? desiredLuminance : toLinear(input.getBlue()) * ratio;

    if (r > 1.0 || g > 1.0 || b > 1.0) {  // anything outside range?
        double br = Math.min(r, 1.0);  // base values
        double bg = Math.min(g, 1.0);
        double bb = Math.min(b, 1.0);
        double rr = 1.0 - br;  // ratios between RGB components to maintain
        double rg = 1.0 - bg;
        double rb = 1.0 - bb;
        double x = (desiredLuminance - luminance(br, bg, bb)) / luminance(rr, rg, rb);

        r = 0.0001 * Math.round(10000.0 * (br + rr * x));
        g = 0.0001 * Math.round(10000.0 * (bg + rg * x));
        b = 0.0001 * Math.round(10000.0 * (bb + rb * x));
    }

    return Color.color(toGamma(r), toGamma(g), toGamma(b));
}
And supporting functions:
private static double toLinear(double v) {  // inverse is #toGamma
    return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
}

private static double toGamma(double v) {  // inverse is #toLinear
    return v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1.0 / 2.4) - 0.055;
}

private static double luminance(Color c) {
    return luminance(toLinear(c.getRed()), toLinear(c.getGreen()), toLinear(c.getBlue()));
}

private static double luminance(double r, double g, double b) {
    return r * 0.2126 + g * 0.7152 + b * 0.0722;
}
}