I would like to convert a 10-bit BT.2020 YUV color image to XYZ color components. Can anybody help me with this?
Also, are the Y component in YUV and the L component in Lab the same?
According to this document, Y in YUV is the same as Y in the CIE XYZ space. However, L in the CIE Lab space has a nonlinear relation with Y. You can check the relation in the same document, equation 19.
So the short answer to your question is no. Also, for colorspace conversion, I prefer this library.
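For reference, a minimal sketch of that nonlinear relation (assuming the input is relative luminance already normalised to the reference white, i.e. Y/Yn in [0, 1]; the constants are the standard CIE 1976 ones):

    def luminance_to_lightness(t):
        # t = Y / Yn, relative luminance normalised to the reference white
        if t > (6.0 / 29.0) ** 3:           # ~0.008856
            return 116.0 * t ** (1.0 / 3.0) - 16.0
        return (29.0 / 3.0) ** 3 * t        # linear segment near black, ~903.3 * t

Mid grey (Y/Yn = 0.18), for example, comes out around L* ≈ 49.5, which shows how strongly nonlinear the mapping is.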
I know this is an old question, but I recently came across the same problem and only found a hint of what needs to be done by accident (a reference in a Google Books search preview: p. 251 of "Digital Video and HD: Algorithms and Interfaces").
This is my understanding of what needs to be done, so it could be incorrect and/or incomplete:
Calculate the Normalised Primary Matrix (NPM) using SMPTE RP-177 section 3.3 in combination with the colour primaries (CIE 1931 chromaticity coordinates) of the colour space (e.g. Rec.709). There is a worked example in Annex B.1.
Convert the YUV (YCbCr) values to RGB using the matrix coefficients of the colour space.
Then apply the NPM to the RGB values to calculate the XYZ coordinates; a rough code sketch of these three steps follows below.
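To make that concrete for the original 10-bit BT.2020 question, here is a rough sketch under a few assumptions: narrow-range (64-940 / 64-960) 10-bit code values, the non-constant-luminance BT.2020 matrix coefficients, the SDR BT.2020 transfer function (not PQ/HLG), and an RGB-to-XYZ matrix derived from the BT.2020 primaries and D65 white using the RP-177 procedure (values rounded). The function names are made up for illustration:

    import numpy as np

    # BT.2020 non-constant-luminance luma coefficients
    KR, KB = 0.2627, 0.0593
    KG = 1.0 - KR - KB

    # Normalised primary matrix (linear BT.2020 RGB -> CIE XYZ), D65 white,
    # as produced by the RP-177 procedure (rounded)
    BT2020_RGB_TO_XYZ = np.array([
        [0.636958, 0.144617, 0.168881],
        [0.262700, 0.677998, 0.059302],
        [0.000000, 0.028073, 1.060985],
    ])

    def bt2020_to_linear(v):
        # Inverse of the BT.2020 (SDR) transfer function
        return v / 4.5 if v < 0.081 else ((v + 0.099) / 1.099) ** (1.0 / 0.45)

    def ycbcr10_to_xyz(y, cb, cr):
        # 1. Undo the 10-bit narrow-range quantisation
        yp = (y - 64.0) / 876.0      # Y'  in [0, 1]
        pb = (cb - 512.0) / 896.0    # Cb in [-0.5, 0.5]
        pr = (cr - 512.0) / 896.0    # Cr in [-0.5, 0.5]

        # 2. Y'CbCr -> non-linear R'G'B' using the BT.2020 matrix coefficients
        rp = yp + 2.0 * (1.0 - KR) * pr
        bp = yp + 2.0 * (1.0 - KB) * pb
        gp = (yp - KR * rp - KB * bp) / KG

        # 3. Linearise, then 4. apply the NPM to get XYZ
        rgb_lin = np.array([bt2020_to_linear(c) for c in (rp, gp, bp)])
        return BT2020_RGB_TO_XYZ @ rgb_lin

As a sanity check, peak white (940, 512, 512) should land near the D65 white point, roughly XYZ ≈ (0.9505, 1.0, 1.0891).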
I just imported an image taken with my iPhone 7 into MATLAB. It turns out that the image is 3-D instead of 2-D.
boxImage1 = imread('IMG_5175.jpg');
boxImage1 480x640x3 921600 uint8
Can anyone explain why the size of the image is 3-D instead of just 2-D? I am trying to run object detection tools on a set of images to extract relevant objects.
Thanks,
As pointed out in the comments, the third dimension holds the R, G, and B channels. Have a look at the MATLAB documentation:
If the file contains a truecolor image, then A is an m-by-n-by-3 array.
Converting it to grayscale, using rgb2gray, is often a good idea, but it may depend on your application:
I = rgb2gray(boxImage1); % 480x640 matrix
I am trying to convert an RGB image to the perceptually uniform color space CIELAB. Wikipedia states:
"The RGB or CMYK values first must be transformed to a specific
absolute color space, such as sRGB or Adobe RGB. This adjustment will
be device-dependent, but the resulting data from the transform will be
device-independent, allowing data to be transformed to the CIE 1931
color space and then transformed into L*a * b*."
I know there are some straightforward transformations once the values are in sRGB, but I have not found any material on going from RGB to sRGB. So, what methods exist to do such a conversion?
No, you should not go from (linear) RGB to sRGB. In fact, it is the other way around. The steps are as follows:
Convert sRGB into linear RGB. An sRGB image is gamma-encoded, which means a camera applies a gamma function of roughly pow(x, 1/2.2) to the light signal (the exact sRGB curve is piecewise, with a short linear segment near black). These sRGB values are therefore in a non-linear gamma space.
Now, converting linear RGB to LAB involves two steps. The first is converting linear RGB to the XYZ color space (a basic, device-independent color space). This conversion is a linear operation, i.e., a matrix multiplication, which is why you need linear RGB values rather than gamma-encoded sRGB. Finally, the XYZ values are converted into LAB values through a non-linear operation that uses some standard formulas (which you don't need to worry about, since they are fixed).
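A minimal sketch of those steps, assuming 8-bit sRGB input and the D65 reference white (the matrix and constants are the standard published sRGB/CIE values; the helper names are just for illustration):

    import numpy as np

    # sRGB (D65) linear RGB -> XYZ matrix, standard published values
    SRGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])
    D65_WHITE = np.array([0.95047, 1.00000, 1.08883])

    def srgb_to_linear(c):
        # Step 1: undo the sRGB gamma encoding (c in [0, 1])
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def xyz_to_lab(xyz):
        # Step 2b: non-linear XYZ -> L*a*b* relative to the D65 white
        t = xyz / D65_WHITE
        delta = 6.0 / 29.0
        f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4.0 / 29.0)
        L = 116.0 * f[1] - 16.0
        a = 500.0 * (f[0] - f[1])
        b = 200.0 * (f[1] - f[2])
        return np.array([L, a, b])

    def srgb8_to_lab(rgb8):
        rgb_lin = srgb_to_linear(np.asarray(rgb8, dtype=float) / 255.0)  # step 1
        xyz = SRGB_TO_XYZ @ rgb_lin                                      # step 2a (linear)
        return xyz_to_lab(xyz)                                           # step 2b (non-linear)

For instance, pure white (255, 255, 255) should come out close to L* = 100, a* = 0, b* = 0.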
Interesting links:
(i) Understanding sRGB and linear RGB space: http://filmicgames.com/archives/299; http://www.cambridgeincolour.com/tutorials/gamma-correction.htm
(ii) MATLAB tutorial: https://de.mathworks.com/help/vision/ref/colorspaceconversion.html
(iii) Python package: http://pydoc.net/Python/pwkit/0.2.1/pwkit.colormaps/
(iv) C code: http://svn.int64.org/viewvc/int64/colors/color.c?view=markup
(v) OpenCV does not expose a separate sRGB-to-linear-RGB conversion, but it performs the conversion internally inside the color.cpp code (OpenCV_DIR\modules\imgproc\src\color.cpp). Check out the method called initLabTabs(); there is gamma encoding and decoding there. OpenCV color conversion API: http://docs.opencv.org/3.1.0/de/d25/imgproc_color_conversions.html
I am working on an image processing project and I have to use entropyfilt (from MATLAB).
I have researched and found some information, but not enough. I can calculate the entropy value of an image, but I don't know how to write an entropy filter. There is a similar question on the site, but I didn't understand it either.
Can anybody help me understand the entropy filter?
From the MATLAB documentation:
J = entropyfilt(I) returns the array J, where each output pixel contains the entropy value of the 9-by-9 neighborhood around the corresponding pixel in the input image I. I can have any dimension. If I has more than two dimensions, entropyfilt treats it as a multidimensional grayscale image and not as a truecolor (RGB) image. The output image J is the same size as the input image I.
For each pixel, you look at the 9 by 9 area around the pixel and calculate the entropy. Since the entropy calculation is a nonlinear calculation, it is not something you can do with a simple kernel filter. You have to loop over each pixel and do the calculation on a per-pixel basis.
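As a rough sketch of that per-pixel loop (shown in Python with SciPy rather than MATLAB, and assuming an 8-bit grayscale input; entropyfilt uses symmetric padding at the borders, so edge values may differ slightly depending on the padding mode you pick):

    import numpy as np
    from scipy.ndimage import generic_filter

    def local_entropy(window):
        # Shannon entropy (in bits) of one flattened 9x9 neighborhood,
        # using a 256-bin histogram as for uint8 data
        counts = np.bincount(window.astype(np.uint8), minlength=256)
        p = counts[counts > 0] / window.size
        return -np.sum(p * np.log2(p))

    def entropy_filter(gray_uint8, size=9):
        # generic_filter hands every size-by-size neighborhood to local_entropy
        return generic_filter(gray_uint8, local_entropy, size=size, mode='reflect')

    # J = entropy_filter(I)   # I: 2-D uint8 grayscale image, J: same size, float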
Folks,
I have read a number of articles on the Discrete Wavelet Transform (DWT) and looked at some sample code as well. However, I am not clear on what exactly the DWT achieves.
Here is what I understand. For a two-dimensional image in YUV format, I can pass the Y plane (brightness) to the DWT function as a parameter. The function returns a matrix of the original width and height containing coefficient values.
What are these coefficient values telling me? Are they a measure of how quickly the brightness changes between a pixel and its neighbors?
Further, the returned matrix is rearranged into four quarters. Because the coefficients have been rearranged, I no longer know which coefficient belongs to which pixel, which is confusing. If I cannot associate a coefficient with its corresponding pixel location, how can I really use the coefficients?
A little bit of background. I am looking at hiding some information in an image as an invisible watermark. From what I understand, DWT can help me identify the best region to hide the information. However, I have not been able to put the whole picture together.
Ok, I figured out how the DWT works. I was under the assumption that the generated coefficients have a direct relationship with the original image. However, the transform converts the input luma into a completely different set of values, and it is possible to run the reverse transform on those values to obtain the original values again.
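A small sketch of that round trip, assuming the PyWavelets package is available: a single-level 2-D Haar DWT splits the luma plane into four quarter-size subbands (the LL approximation plus horizontal, vertical, and diagonal detail bands, which is why the output looks rearranged into four quarters), and the inverse transform recovers the original plane:

    import numpy as np
    import pywt

    luma = np.random.rand(256, 256)     # stand-in for the Y plane

    # Forward single-level 2-D DWT: approximation (LL) plus detail bands (LH, HL, HH),
    # each 128x128 here, i.e. the four quarters of the transformed image
    LL, (LH, HL, HH) = pywt.dwt2(luma, 'haar')

    # The inverse transform reconstructs the original plane (up to float rounding)
    reconstructed = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
    assert np.allclose(luma, reconstructed)

For watermarking, the usual approach is to perturb some of the detail coefficients (often LH/HL) and then run the inverse transform, since changes there tend to be less visible than changes to LL.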
Regards,
Peter
I've been looking around for a simple algorithm to get and set the brightness of a pixel, but can't find anything - only research papers and complex libraries.
So does anyone know what is the formula to calculate the brightness of a pixel? And which formula should I use to change the brightness?
Edit: to clarify the question: I'm using Qt with C++, but I'm mainly looking for a generic math formula - I will adapt it to the language. I'm talking about RGB pixels of an image in memory. By "brightness" I mean the same as in Photoshop: increasing the brightness makes the image more "white" (a brightness value of 1.0 is completely white), and decreasing it makes it more "black" (a value of 0.0).
Change the color representation to HSV. The V component stands for value and represents the brightness!
Here is the algorithm implemented in PHP.
Here is a description of how to do it in C.
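Building on the HSV suggestion above, a minimal sketch in Python using the standard colorsys module (the same math ports directly to C++/Qt); all components are in [0, 1], with V = 0.0 black and V = 1.0 full brightness:

    import colorsys

    def get_brightness(r, g, b):
        # HSV 'value' component of an RGB pixel
        _, _, v = colorsys.rgb_to_hsv(r, g, b)
        return v

    def set_brightness(r, g, b, v):
        # Keep hue and saturation, replace the value component (clamped to [0, 1])
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)
        return colorsys.hsv_to_rgb(h, s, max(0.0, min(1.0, v)))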
What do you mean by a pixel?
You can set the brightness of a pixel in an image with '='; you just need to know the memory layout of the image.
Setting a pixel on the screen is a little more complicated.