I am trying to convert RGB values to the perceptually uniform color space CIELAB. Wikipedia states:
"The RGB or CMYK values first must be transformed to a specific
absolute color space, such as sRGB or Adobe RGB. This adjustment will
be device-dependent, but the resulting data from the transform will be
device-independent, allowing data to be transformed to the CIE 1931
color space and then transformed into L*a*b*."
I know the transformations are straightforward once the values are in sRGB, but I have not found any material on going from RGB to sRGB. So, what methods exist to do such a conversion?
No, you should not go from (linear) RGB to sRGB; in fact, it is the other way around. These are the steps:
Convert sRGB into linear RGB. An sRGB image is gamma encoded, which means the camera applies a gamma function (roughly pow(x, 1/2.2)) to the light signal. The stored sRGB values therefore live in a non-linear gamma space.
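The decoding step can be sketched in Python. Note that the official sRGB transfer function is piecewise (a small linear segment near black plus a 2.4-exponent power curve); pow(x, 2.2) is only an approximation of it.

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value (0..1) to linear light (0..1).

    Uses the piecewise sRGB EOTF: a linear toe near black, then a
    power curve with exponent 2.4.
    """
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-gray sRGB 0.5 corresponds to only ~21% linear light, not 50% --
# which is the whole point of gamma encoding.
mid = srgb_to_linear(0.5)
```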
Now, converting linear RGB to LAB involves two steps. First, convert linear RGB to the XYZ color space (a basic reference color space). This conversion is a linear operation, i.e., a matrix multiplication, which is why you need linear RGB values rather than sRGB: the data must be in linear space. Finally, the XYZ values are converted into LAB values through a non-linear operation that uses some standard formulas (which you don't need to worry about).
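Both steps can be sketched in a few lines of Python, assuming sRGB primaries with a D65 white point (the matrix below is the standard sRGB-to-XYZ matrix, and the Lab formulas are the standard CIE ones):

```python
def linear_rgb_to_lab(r, g, b):
    """Convert linear-light sRGB (0..1 per channel) to CIE L*a*b*."""
    # Step 1: linear RGB -> XYZ, a plain matrix multiply (sRGB/D65 matrix).
    # This is why linear RGB is required here, not gamma-encoded sRGB.
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b

    # Step 2: XYZ -> Lab, non-linear. Normalize by the D65 reference
    # white, then apply the CIE f() function.
    xn, yn, zn = 0.95047, 1.00000, 1.08883

    def f(t):
        d = 6.0 / 29.0
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d * d) + 4.0 / 29.0

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b_ = 200.0 * (fy - fz)
    return L, a, b_
```

Reference white maps to L* = 100 with a* = b* = 0, and black maps to (0, 0, 0), which is a quick sanity check for any implementation.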
Interesting links:
(i) Understanding sRGB and linear RGB space: http://filmicgames.com/archives/299; http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
(ii) MATLAB tutorial: https://de.mathworks.com/help/vision/ref/colorspaceconversion.html
(iii) Python package: http://pydoc.net/Python/pwkit/0.2.1/pwkit.colormaps/
(iv) C code: http://svn.int64.org/viewvc/int64/colors/color.c?view=markup
(v) OpenCV does not expose the sRGB to linear RGB conversion as a separate step, but it does perform it inside the color.cpp code (OpenCV_DIR\modules\imgproc\src\color.cpp). Check out the method called initLabTabs(); there is gamma encoding and decoding there. OpenCV color conversion API: http://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
Related
As per Weber's law, delta(L)/L is a constant, where L is luminance measured in candela/m², i.e. (L2 - L1)/L1. This implies that a small change in the lower (darker) luminance range is perceptually much larger than the same change in the higher (brighter) luminance range.
The sRGB images we have stored are gamma corrected, i.e. they first undergo a non-linear transfer function, which also partially simulates human perception.
I would like to know what happens to luminance masking after gamma correction. Does Weber's law still hold on these sRGB images, or are they perceptually uniform, i.e. is one unit of difference in pixel value the same whether it occurs in a dark region or a bright region? In other words, is delta(L) constant in gamma-corrected images, where L is the gamma-corrected pixel value?
Weber's law does not apply to sRGB-coded values to the extent that it applies to luminance. In other words, an sRGB value is closer to being perceptually uniform than cd/m² is.
To answer your question, I would NOT expect delta(sRGB coded pseudo-L) to be (even vaguely) constant.
However, keep in mind that both Weber-Fechner and sRGB are coarse approximations to perception. CIECAM02 is a more modern alternative worth exploring.
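One way to see this numerically: take a simple pure-power gamma of 2.2 as a stand-in for the sRGB curve, and CIE lightness L* ≈ 116·Y^(1/3) − 16 as a stand-in for perception, then compare how much perceived lightness changes for the same one-code-value step at the dark and bright ends. This is a rough illustration under those two stated approximations, not a psychophysical model.

```python
def lightness(y):
    """Approximate CIE L* from relative luminance y (0..1).

    Ignores the near-black linear segment of the real L* formula.
    """
    return 116.0 * y ** (1.0 / 3.0) - 16.0

def lstar_step(v, dv=1.0 / 255.0, gamma=2.2):
    """Change in L* caused by a one-code-value step at coded value v (0..1)."""
    return lightness((v + dv) ** gamma) - lightness(v ** gamma)

dark = lstar_step(0.10)    # one code step in a dark region
bright = lstar_step(0.90)  # one code step in a bright region
# The two steps differ (so Weber-style uniformity does not hold exactly
# on coded values), but only by a small factor -- far closer to uniform
# than equal steps in linear luminance would be.
```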
There is no bijection between RGB and Parula, as discussed here.
I am wondering how best to do image processing on files stored in Parula.
This challenge has been developed from this thread about removing black color from ECG images, extending that case to a generalized problem with Parula colors.
Data:
which is generated by
[X,Y,Z] = peaks(25);
imgParula = surf(X,Y,Z);
view(2);
axis off;
Using this generating code in your solution to read the second image is not the point of this thread.
Code:
[imgParula, map, alpha] = imread('http://i.stack.imgur.com/tVMO2.png');
where map is [] and alpha is a completely white image. Doing imshow(imgParula) gives
where you see a lot of interference and loss of resolution, because Matlab reads the image as RGB although the actual colormap is Parula.
Resizing this picture does not improve resolution.
How can you read an image into its corresponding colormap in Matlab?
I did not find any parameter to specify the colormap when reading.
The Problem
There is a one-to-one mapping from indexed colors in the parula colormap to RGB triplets. However, no such mapping exists in the reverse direction: an arbitrary RGB triplet generally does not appear in the colormap exactly, and there are infinitely many reasonable ways to assign it an index. Thus, there is no one-to-one correspondence or bijection between the two spaces. The plot below, which shows the R, G, and B values for each parula index, makes this clearer.
This is the case for most indexed colormaps. Any solution to this problem will be non-unique.
A Built-in Solution
After playing around with this a bit, I realized that there's already a built-in function that may be sufficient: rgb2ind, which converts RGB image data to indexed image data. This function uses dither (which in turn calls the mex function ditherc) to perform the inverse colormap transformation.
Here's a demonstration that uses JPEG compression to add noise and distort the colors in the original parula index data:
img0 = peaks(32); % Generate sample data
img0 = img0-min(img0(:));
img0 = floor(255*img0./max(img0(:))); % Convert to 0-255
fname = [tempname '.jpg']; % Save file in temp directory
map = parula(256); % Parula colormap
imwrite(img0,map,fname,'Quality',50); % Write data to compressed JPEG
img1 = imread(fname); % Read RGB JPEG file data
img2 = rgb2ind(img1,map,'nodither'); % Convert RGB data to parula colormap
figure;
image(img0); % Original indexed data
colormap(map);
axis image;
figure;
image(img1); % RGB JPEG file data
axis image;
figure;
image(img2); % rgb2ind indexed image data
colormap(map);
axis image;
This should produce images similar to the first three below.
Alternative Solution: Color Difference
Another way to accomplish this task is by comparing the difference between the colors in the RGB image with the RGB values that correspond to each colormap index. The standard way to do this is by calculating ΔE in the CIE L*a*b* color space. I've implemented a form of this in a general function called rgb2map that can be downloaded from my GitHub. This code relies on makecform and applycform in the Image Processing Toolbox to convert from RGB to the 1976 CIE L*a*b* color space.
The following code will produce an image like the one on the right above:
img3 = rgb2map(img1,map);
figure;
image(img3); % rgb2map indexed image data
colormap(map);
axis image;
For each RGB pixel in an input image, rgb2map calculates the color difference between it and every RGB triplet in the input colormap using the CIE 1976 standard. The min function is used to find the index of the minimum ΔE (if more than one minimum value exists, the index of the first is returned). More sophisticated means can be used to select the "best" color in the case of multiple ΔE minima, but they will be more costly.
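For readers outside Matlab, the same idea can be sketched in plain Python. This is not rgb2map itself, just a minimal illustration of the nearest-ΔE search it performs, using the CIE76 difference (plain Euclidean distance in L*a*b*), standard sRGB-to-Lab formulas, and a tiny made-up colormap (the entries below are NOT the real parula values):

```python
import math

def _srgb_to_lab(rgb):
    """sRGB triplet (0..1) -> CIE L*a*b* via linear RGB and XYZ (D65)."""
    r, g, b = (c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
               for c in rgb)
    x = (0.4124564 * r + 0.3575761 * g + 0.1804375 * b) / 0.95047
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = (0.0193339 * r + 0.1191920 * g + 0.9503041 * b) / 1.08883
    f = lambda t: t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def nearest_map_index(rgb, cmap):
    """Index of the colormap entry with the minimum CIE76 delta-E to rgb.

    Like MATLAB's min(), ties return the first minimizing index.
    """
    lab = _srgb_to_lab(rgb)
    de = [math.dist(lab, _srgb_to_lab(c)) for c in cmap]  # CIE76 = Euclidean in Lab
    return de.index(min(de))

# Tiny stand-in colormap for demonstration only.
demo_map = [(0.0, 0.0, 0.5), (0.0, 0.6, 0.6), (0.6, 0.7, 0.2), (1.0, 0.9, 0.1)]
```

A pixel whose color has been perturbed (e.g. by JPEG noise) still snaps back to the nearest colormap entry.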
Conclusions
As a final example, I used an image of the namesake Parula bird to compare the two methods in the figure below. The two results are quite different for this image. If you manually adjust rgb2map to use the more complex CIE 1994 color difference standard, you'll get yet another rendering. However, for images that more closely match the original parula colormap (as above) both should return more similar results. Importantly, rgb2ind benefits from calling mex functions and is almost 100 times faster than rgb2map despite several optimizations in my code (if the CIE 1994 standard is used, it's about 700 times faster).
Lastly, those who want to learn more about colormaps in Matlab should read this four-part MathWorks blog post by Steve Eddins on the new parula colormap.
Update 6-20-2015: The rgb2map code described above was updated to use different color space transforms, which improves its speed by almost a factor of two.
Given that in RGB we can represent 256^3 = 16,777,216 color combinations, and since the human eye can only distinguish roughly 10,000,000, there is obviously a surplus of 6,777,216 RGB combinations that are chromatically indistinguishable from counterpart colors.
Compression algorithms work on this basis when approximating away spatial differences in color across a frame, I believe. With that in mind, how can one reliably compute whether a given color is within a range of 'similarity' to another?
Of course, 'similarity' will be some kind of arbitrary/tunable parameter that can be tweaked, but this is an approximation anyway. So any pointers, pseudocode, intuitive code samples, resources out there to help me model such a function?
Many thanks for your help
There are many ways of computing distances between colors, the simplest ones being defined on color components in any color space. These are common "distances" or metrics between RGB colors (r1,g1,b1) and (r2,g2,b2):
L1: abs(r1-r2) + abs(g1-g2) + abs(b1-b2)
L2: sqrt((r1-r2)² + (g1-g2)² + (b1-b2)²)
L∞: max(abs(r1-r2), abs(g1-g2), abs(b1-b2))
These however don't take into account the fact that human vision is less sensitive to color than to brightness. For optimal results you should convert from RGB to a color space that encodes brightness and color separately. Then use one of the above metrics in the new color space, possibly giving more weight to the brightness component and less to the color components.
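For reference, the three metrics above transcribe directly into code (channel values here are plain 0–255 ints or floats):

```python
def dist_l1(c1, c2):
    """Manhattan distance: sum of absolute channel differences."""
    return sum(abs(a - b) for a, b in zip(c1, c2))

def dist_l2(c1, c2):
    """Euclidean distance in RGB space."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def dist_linf(c1, c2):
    """Chebyshev distance: the largest single-channel difference."""
    return max(abs(a - b) for a, b in zip(c1, c2))
```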
Areas of color that are indistinguishable from each other are called MacAdam ellipses. The ellipses become nearly circular in the CIELUV and CIELAB color spaces, which is great for computation, but unfortunately going from RGB into these color spaces is not so simple.
JPEG converts colors into YCbCr, where Y is brightness and the two C's encode color, and then halves the resolution of the C components. You could do the same and then use a weighted version of one of the above metrics, for example:
diff = sqrt(1.4*sqr(y1-y2) + .8*sqr(cb1-cb2) + .8*sqr(cr1-cr2))
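A sketch of that weighted approach in Python, using the BT.601 RGB-to-YCbCr conversion (the 1.4/0.8 weights below are just the example values from the formula above, not canonical constants; tune them to taste):

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB (0..255) -> (Y, Cb, Cr)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)
    cr = 128 + 0.713 * (r - y)
    return y, cb, cr

def weighted_diff(rgb1, rgb2, wy=1.4, wc=0.8):
    """Weighted Euclidean distance in YCbCr: brightness counts more than color."""
    y1, cb1, cr1 = rgb_to_ycbcr(*rgb1)
    y2, cb2, cr2 = rgb_to_ycbcr(*rgb2)
    return (wy * (y1 - y2) ** 2 + wc * (cb1 - cb2) ** 2 + wc * (cr1 - cr2) ** 2) ** 0.5
```

For two grays (equal R=G=B) only the Y term contributes, so the distance reduces to sqrt(wy) times the brightness difference.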
The article on color difference in wikipedia has more examples for different color spaces.
Perceptual color difference can be calculated using the CIEDE2000 color-difference formula. The CIEDE2000 formula is based on the LCH color space (Lightness, Chroma, and Hue). LCH color space is represented as a cylinder (see image here).
A less accurate (but more manageable) model is the CIE76 color-difference formula, which is based on the Lab color space (L*a*b*). There are no simple formulas for conversion between RGB or CMYK values and L*a*b*, because the RGB and CMYK color models are device dependent. The RGB or CMYK values first need to be transformed to a specific absolute color space, such as sRGB or Adobe RGB. This adjustment will be device dependent, but the resulting data from the transform will be device independent, allowing data to be transformed to the CIE 1931 color space and then transformed into L*a*b*. This article explains the procedure and the formulas.
The RGB color system is designed such that if two colors have values close to each other, the colors are also perceptually close.
Example:
color defined by RGB = (100, 100, 100) is perceptually almost the same as colors
RGB = (101, 101, 100), RGB = (98, 100, 99) etc...
I have a matrix with floating-point pixel coordinates and corresponding matrix of greyscale values in this floating-point pixel coordinates. I need to remap an image from floating-point pixel coordinates to the regular grid. The cv::remap function from opencv transforms a source image like this:
dst(x,y) = src(mapx(x,y), mapy(x,y))
In my case I have something like this:
dst(mapx(x,y), mapy(x,y)) = src(x,y)
From the equation above I need to determine destination image (dst(x,y)).
Is there an easy way in OpenCV to perform such a remapping, or can you suggest any other open source image processing library to solve the problem?
Take the four corners of your picture.
Extract their correspondents in the dst image. Store them in two point vectors: std::vector<cv::Point> dstPts, srcPts.
Extract the geometric relation between them with cv::findHomography(dstPts, srcPts, ...).
Apply cv::warpPerspective(). Internally, it calculates and applies the correct remapping.
This works if the transform defined in your maps is a homographic transform. It doesn't work if it's some swirling, fisheye effect, lens-correction map, etc.
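If you want to see what cv::findHomography computes from those four corner pairs, here is a minimal NumPy sketch of the underlying direct linear transform (DLT). This illustrates the math only; OpenCV's implementation adds normalization and robust estimation and should be preferred in practice:

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points (DLT).

    src, dst: sequences of (x, y) pairs; at least 4 non-degenerate
    correspondences are required.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows, dtype=float)
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Map one (x, y) point through H, dividing out the projective scale."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Once H is estimated from the four corners, warping every pixel through it is exactly what cv::warpPerspective does.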
How to convert HSV color directly to CMYK color?
Bonus points for mentioning JavaScript library that does that.
I've seen only solutions that convert HSV to RGB and then RGB to CMYK.
The only solution I'm aware of is to convert to RGB as a middle tier and then convert out to the format you want (CMYK → RGB → HSV, or HSV → RGB → CMYK), like you mentioned. I'm not sure if it's something to do with the math or another reason entirely, but here is a library from the web toolkit that will at least let you get the conversion done.
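The two-hop conversion is easy to write yourself. Here is a sketch in Python using the standard library's colorsys for the HSV-to-RGB leg and the usual naive (non-ICC, device-independent-in-name-only) RGB-to-CMYK formula:

```python
import colorsys

def hsv_to_cmyk(h, s, v):
    """HSV (all components 0..1) -> CMYK (all 0..1) via RGB as the middle tier."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    k = 1.0 - max(r, g, b)
    if k >= 1.0:  # pure black: chromatic components are undefined, use 0
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```

Keep in mind this naive formula ignores real ink behavior and device profiles; for print work you would go through an ICC color-management library instead.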
A little more reading on my part turned up this:
"HSL and HSV are defined purely with reference to some RGB space; they are not absolute color spaces: to specify a color precisely requires reporting not only HSL or HSV values, but also the characteristics of the RGB space they are based on, including the gamma correction in use."
Source
Essentially, from what I can gather, HSV and HSL can't be directly converted because they're not absolute color spaces: they need the RGB space they are based on to be meaningful. I'm not a color expert, but I would venture that this is why you can't directly convert between HSV and CMYK, and I would assume this is the process that goes on under the covers of conversion engines (like the web-based ones) that appear to convert directly.