Difference between V of HSV space and Y of YUV space - image

It seems that both V of HSV and Y of YUV represent brightness. Which one is better for representing brightness, or is there a better indicator for measuring image brightness?

The Y in YUV, according to Wikipedia, is a weighted sum of the three RGB components:
Y = 0.299 * R + 0.587 * G + 0.114 * B
The V in HSV, according to Wikipedia, is defined as the largest component of a color:
V = M = max(R, G, B)
The answer is thus no, they are not the same.
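As a quick sanity check, here is a minimal Python sketch (assuming 8-bit RGB values) showing how far the two measures can disagree for a saturated colour:
def y_from_rgb(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights

def v_from_rgb(r, g, b):
    return max(r, g, b)                        # HSV value

r, g, b = 0, 0, 255          # pure blue
print(y_from_rgb(r, g, b))   # ~29.1 -> perceptually dark
print(v_from_rgb(r, g, b))   # 255   -> "full value" despite looking dark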
EDIT
The two color spaces have different backgrounds. The HSV model rearranges the geometry of RGB in an attempt to be more intuitive and perceptually relevant than the Cartesian (cube) representation. The YUV model encodes a color image or video taking human perception into account, allowing reduced bandwidth for the chrominance components; transmission errors and compression artifacts are typically masked more effectively by human perception than with a "direct" RGB representation.
So which representation is more suitable depends on your application.

Related

Image processing using convolution

Well, I was trying out convolution on grayscale images, but when I searched for convolution on RGB images, I couldn't find a satisfactory explanation. How do I apply convolution to RGB images?
A linear combination of vectors can be computed by linearly combining corresponding vector elements:
a * [x1, y1, z1] + b * [x2, y2, z2] = [a*x1+b*x2, a*y1+b*y2 , a*z1+b*z2]
Because a convolution is a linear operation (i.e. you weight each pixel within a neighborhood and add up the results), it follows that you can apply a convolution to each of the RGB channels independently (e.g. using MATLAB syntax):
img = imread(...);
img(:,:,1) = conv2(img(:,:,1),kernel);
img(:,:,2) = conv2(img(:,:,2),kernel);
img(:,:,3) = conv2(img(:,:,3),kernel);
You can look at this in two different ways. First, you may convert the color image into an intensity image using a weighting vector. The most common one is (0.299, 0.587, 0.114), which is the standard grayscale conversion: I = 0.299*R + 0.587*G + 0.114*B.
If you are designing your own convolutional network and intend to keep the color channels as inputs, just treat the color image as a 4D tensor with 3 channels. For example, if you have an (h x w) image, the tensor size is (1 x h x w x 3) and you may use a filter of size (kh x kw x 3 x f), where kh and kw are your filter dimensions and f is the number of output features.
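As a rough illustration of that second view, here is a minimal NumPy sketch of a "valid" multi-channel convolution with a (kh x kw x 3 x f) filter bank (the function name is my own; like most CNN frameworks it computes cross-correlation, i.e. no kernel flip):
import numpy as np

def conv2d_multichannel(img, kernels):
    # img: (h, w, c) array; kernels: (kh, kw, c, f) filter bank.
    # Returns a (h-kh+1, w-kw+1, f) "valid" output.
    h, w, c = img.shape
    kh, kw, kc, f = kernels.shape
    assert kc == c, "kernel depth must match the number of image channels"
    out = np.zeros((h - kh + 1, w - kw + 1, f))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw, :]          # (kh, kw, c)
            out[i, j, :] = np.tensordot(patch, kernels,
                                        axes=([0, 1, 2], [0, 1, 2]))
    return out
Per-channel filtering (the MATLAB snippet above) is the special case where each output feature only "sees" one input channel.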

Image filtering without built in function matlab [duplicate]

I have the following code in MATLAB:
I=imread(image);
h=fspecial('gaussian',si,sigma);
I=im2double(I);
I=imfilter(I,h,'conv');
figure,imagesc(I),impixelinfo,title('Original Image after Convolving with gaussian'),colormap('gray');
How can I define and apply a Gaussian filter to an image without imfilter, fspecial and conv2?
It's really unfortunate that you can't use some of the built-in methods from the Image Processing Toolbox to help you do this task. However, we can still do what you're asking, though it will be a bit more difficult. I'm still going to use some functions from the IPT to help us do what you're asking. Also, I'm going to assume that your image is grayscale. I'll leave it to you if you want to do this for colour images.
Create Gaussian Mask
What you can do is create a grid of 2D spatial co-ordinates using meshgrid that is the same size as the Gaussian filter mask you are creating. I'm going to assume that N is odd to make my life easier. This will allow for the spatial co-ordinates to be symmetric all around the mask.
If you recall, the 2D Gaussian can be defined as:
h(x, y) = 1 / (2*pi*sigma^2) * exp(-(x^2 + y^2) / (2*sigma^2))
The scaling factor in front of the exponential is primarily concerned with ensuring that the area underneath the Gaussian is 1. We will deal with this normalization in another way, where we generate the Gaussian coefficients without the scaling factor, then simply sum up all of the coefficients in the mask and divide every element by this sum to ensure a unit area.
Assuming that you want to create a N x N filter, and with a given standard deviation sigma, the code would look something like this, with h representing your Gaussian filter.
%// Generate horizontal and vertical co-ordinates, where
%// the origin is in the middle
ind = -floor(N/2) : floor(N/2);
[X Y] = meshgrid(ind, ind);
%// Create Gaussian Mask
h = exp(-(X.^2 + Y.^2) / (2*sigma*sigma));
%// Normalize so that total area (sum of all weights) is 1
h = h / sum(h(:));
If you check this with fspecial, for odd values of N, you'll see that the masks match.
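If you want to sanity-check the same construction outside MATLAB, a rough NumPy equivalent (the function name is my own) might look like this:
import numpy as np

def gaussian_mask(N, sigma):
    # N x N Gaussian mask, N assumed odd, normalised to unit sum
    ind = np.arange(-(N // 2), N // 2 + 1)
    X, Y = np.meshgrid(ind, ind)
    h = np.exp(-(X**2 + Y**2) / (2.0 * sigma * sigma))
    return h / h.sum()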
Filter the image
The basic idea behind filtering an image is that for each pixel in your input image, you take a pixel neighbourhood surrounding it that is the same size as your Gaussian mask. You perform an element-by-element multiplication of this pixel neighbourhood with the Gaussian mask and sum up all of the elements. The resulting sum is the output pixel at the corresponding spatial location in the output image. I'm going to use im2col, which takes pixel neighbourhoods and turns them into columns, creating a matrix where each column represents one pixel neighbourhood.
What we can do next is take our Gaussian mask and convert this into a column vector. Next, we would take this column vector, and replicate this for as many columns as we have from the result of im2col to create... let's call this a Gaussian matrix for a lack of a better term. With this Gaussian matrix, we will do an element-by-element multiplication with this matrix and with the output of im2col. Once we do this, we can sum over all of the rows for each column. The best way to do this element-by-element multiplication is through bsxfun, and I'll show you how to use it soon.
The result of this will be your filtered image, but it will be a single vector. You would need to reshape this vector back into matrix form with col2im to get our filtered image. However, a slight problem with this approach is that it doesn't filter pixels where the spatial mask extends beyond the dimensions of the image. As such, you'll actually need to pad the border of your image with zeroes so that we can properly do our filter. We can do this with padarray.
Therefore, our code will look something like this, going with your variables you have defined above:
N = 5; %// Define size of Gaussian mask
sigma = 2; %// Define sigma here
%// Generate Gaussian mask
ind = -floor(N/2) : floor(N/2);
[X Y] = meshgrid(ind, ind);
h = exp(-(X.^2 + Y.^2) / (2*sigma*sigma));
h = h / sum(h(:));
%// Convert filter into a column vector
h = h(:);
%// Filter our image
I = imread(image);
I = im2double(I);
I_pad = padarray(I, [floor(N/2) floor(N/2)]);
C = im2col(I_pad, [N N], 'sliding');
C_filter = sum(bsxfun(@times, C, h), 1);
out = col2im(C_filter, [N N], size(I_pad), 'sliding');
out contains the filtered image after applying a Gaussian filtering mask to your input image I. As an example, let's say N = 9 and sigma = 4, and let's use cameraman.tif, an image that's part of the MATLAB system path. With these parameters, the output is a noticeably blurred (smoothed) version of the input image.

What grayscale conversion algorithm does OpenCV cvtColor() use?

When converting an image in OpenCV from color to grayscale, what conversion algorithm is used? I tried to look this up in the source code on GitHub, but I did not have any success.
The lightness method averages the most prominent and least prominent colors:
(max(R, G, B) + min(R, G, B)) / 2.
The average method simply averages the values:
(R + G + B) / 3.
The luminosity method is a more sophisticated version of the average method. It also averages the values, but it forms a weighted average to account for human perception. We’re more sensitive to green than other colors, so green is weighted most heavily.
The formula for luminosity is 0.21 R + 0.72 G + 0.07 B.
Here is an example of some conversion algorithms:
http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
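For reference, the three methods above are easy to write down directly; a rough NumPy sketch (assuming a float image with channels in R, G, B order):
import numpy as np

def lightness(rgb):    # (max + min) / 2
    return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0

def average(rgb):      # plain mean of the channels
    return rgb.mean(axis=-1)

def luminosity(rgb):   # the perceptual weights quoted above
    return rgb @ np.array([0.21, 0.72, 0.07])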
The color to grayscale algorithm is stated in the cvtColor() documentation. (search for RGB2GRAY).
The formula used is the same as for CCIR 601:
Y = 0.299 R + 0.587 G + 0.114 B
The luminosity formula you gave is from ITU-R Recommendation BT.709. If you want that, you can specify CV_RGB2XYZ (for example) as the conversion code in cvtColor() and then extract the Y channel.
You can get OpenCV to do the "lightness" method you described with a CV_RGB2HLS conversion, then extract the L channel. I don't think OpenCV has a conversion for the "average" method, but if you explore the documentation you will see that there are a few other possibilities.
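As a quick check of the CCIR 601 weights, you can compare cv2.cvtColor against a manual weighted sum; a small sketch (the filename is hypothetical, and note that cv2.imread returns channels in BGR order):
import cv2
import numpy as np

img = cv2.imread("input.png")                    # BGR, uint8
gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
gray_manual = 0.299 * r + 0.587 * g + 0.114 * b  # CCIR 601 / BT.601 weights

# differences should only come from rounding
print(np.abs(gray_cv.astype(np.float64) - gray_manual).max())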
Just wanted to point out that:
img = cv2.imread(rgbImageFileName)
b1 = img[:,:,0] # Gives Blue (not Red!)
b2 = img[:,:,1] # Gives Green
b3 = img[:,:,2] # Gives Red (not Blue!)
On the other hand, loading it as a numeric array works fine:
imgArray= gdalnumeric.LoadFile(rgbImageFileName)
Red = imgArray[0, :, :].astype('float32')
Green = imgArray[1, :, :].astype('float32')
Blue = imgArray[2, :, :].astype('float32')
So just watch out for these oddities.
But when converting to grayscale, cv2.cvtColor uses the bands correctly.
I compared pixel values using Matlab's rgb2gray.
Cheers
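In other words, OpenCV loads images in BGR order; if you need RGB (for plotting, or to match the gdalnumeric array above), just reverse the channel axis. A small sketch with a hypothetical filename:
import cv2

img_bgr = cv2.imread("photo.jpg")   # channels come back as B, G, R
img_rgb = img_bgr[:, :, ::-1]       # reverse the channel axis to get R, G, B
# equivalently: img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)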

Color tint and temperature

Though I have found a lot of topics on color tint and temperature, I have not yet seen any definite solution, which is the reason I am creating this post. My apologies for that.
I am interested in adjusting color temp and tint in images from RGB values, somewhat similar to the iPhoto application found in iOS where it can be adjusted with a slider bar from left to right.
From what I have found, temperature and tint are orthogonal properties, where temperature is adjusted along the blue (left; cool colors) to yellow (right; warm colors) axis, and tint along the green (left) to magenta (right) axis.
How do I adjust them from RGB values using formulas, i.e., what is the underlying implementation of the color temperature and tint slider bars?
I can convert to HSV space and rotate the hue channel towards those (blue, yellow, green, magenta) angles, but how do I do this in a systematic fashion similar to the slider bar implementation, changing gradually from a low level (middle of the slider bar) to a high level (left/right ends of the slider bar)?
Thanks!
You should try using HSL instead of HSV. In HSL, saturation is separated from the hue, and lightness has a well-defined range that is convenient for calculation.
In HSL, to add tint you move the L value into the 50-100% range, and to add shade you move it into the 0-50% range. Saturation in HSL also controls the tone directly, unlike in HSV.
For temperature, you have to devise your own strategy for shifting the color between red and blue, but one golden hint I can give you is: "every pure RGB color has one of its 3 channel values at zero, a second fixed at 255, and the third varying with a factor of 255/60".
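One possible reading of the tint/shade suggestion, sketched with Python's standard colorsys module (which calls the space HLS and works in the 0-1 range; the function name is my own):
import colorsys

def adjust_lightness(r, g, b, l_new):
    # r, g, b and l_new in [0, 1]; l_new > 0.5 tints (towards white),
    # l_new < 0.5 shades (towards black), as described above.
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # note: colorsys uses H, L, S order
    return colorsys.hls_to_rgb(h, l_new, s)

print(adjust_lightness(1.0, 0.0, 0.0, 0.75))   # a tint of pure red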
Hope this helps-
Whereas color temperature is a physical value, its expression in terms of RGB values is not trivial. If all you need is a pair of orthogonal axes in the RGB colorspace for the visual adjustment of white balance, they can be defined with relative ease in such a way as to resemble the true color temperature and its derivative, the tint.
Let us name our RGB temperature BY (for the balance between blue and yellow) and our RGB tint GR (for the balance between green and red). Now, these functions must satisfy the following obvious requirements:
They shall not depend on brightness, or be invariant to multiplication of all the RGB components by the same factor:
BY(r,g,b) = BY(kr, kg, kb),
GR(r,g,b) = GR(kr, kg, kb).
They shall be zero for neutral gray:
BY(1,1,1) = 0,
GR(1,1,1) = 0.
They shall belong to the same range, symmetrical around zero. I will use [-1..+1].
Any combination of BY and GR shall define a valid color.
Now, one of the ways to define them could be:
BY = (r + g - 2b)/(r + g + 2b),
GR = (r - g)/(r + g),
so that each pair of BY and GR determines a specific proportion:
r : g : b = (1 + BY)(1 + GR) : (1 + BY)(1 - GR) : (1 - BY)
Picture the BY-GR plane filled with the colors of maximum brightness: BY is directed right, GR down, and the neutral point (0,0) sits at the center.
Proper adjustment of white balance consists of multiplying the linear RGB values by individual factors:
r_new = wb_r * r_old
g_new = wb_g * g_old
b_new = wb_b * b_old
It happens to work on gamma-compressed RGB too, though not quite as well on sRGB because of the piece-wise definition of its transfer function; the distortion will be small and often unnoticeable. If you want a perfect adjustment, however, make sure to work in linear RGB.
Once a BY-GR pair is chosen and the corresponding RGB proportion calculated, only one degree of freedom remains—the overall multiplier (see req. 1). Choose it so that no pixels become clipped.
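A minimal NumPy sketch of the recipe above (the function names are my own; whether you multiply by the proportion or by its reciprocal depends on whether you want to add a cast or to neutralise one):
import numpy as np

def by_gr_proportion(by, gr):
    # RGB proportion implied by a (BY, GR) pair, per the formula above
    return np.array([(1 + by) * (1 + gr),
                     (1 + by) * (1 - gr),
                     (1 - by)], dtype=np.float64)

def apply_balance(img_linear, by, gr):
    # img_linear: (h, w, 3) float array in linear RGB, values in [0, 1]
    gains = by_gr_proportion(by, gr)
    out = img_linear * gains                 # per-channel multiplication
    return out / max(out.max(), 1e-12)       # overall multiplier chosen so nothing clips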

Ruby, Generate a random hex color (only light colors)

I know this is possibly a duplicate question.
Ruby, Generate a random hex color
My question is slightly different. I need to know, how to generate the random hex light colors only, not the dark.
In this thread, colour luminance is described with the formula
(0.2126*r) + (0.7152*g) + (0.0722*b)
The same formula for luminance is given in wikipedia (and it is taken from this publication). It reflects the human perception, with green being the most "intensive" and blue the least.
Therefore, you can keep selecting random r, g, b values until the luminance rises above your chosen threshold between dark and light on the 0-255 scale. For example:
lum, ary = 0, []
while lum < 128
  ary = (1..3).collect { rand(256) }
  lum = ary[0]*0.2126 + ary[1]*0.7152 + ary[2]*0.0722
end
Another article refers to brightness, being the arithmetic mean of r, g and b. Note that brightness is even more subjective, as a given target luminance can elicit different perceptions of brightness in different contexts (in particular, the surrounding colours can affect your perception).
All in all, it depends on which colours you consider "light".
Just some pointers:
Use HSL and generate the individual values randomly, but keeping L in the interval of your choosing. Then convert to RGB, if needed.
It's a bit harder than generating RGB with all components over a certain value (say 0x7f), but this is the way to go if you want the colors distributed evenly.
I found that channel values from 128 to 255 give the lighter colors (VB.NET):
Dim rand As New Random
Dim col As Color
col = Color.FromArgb(rand.Next(128, 256), rand.Next(128, 256), rand.Next(128, 256))
All colors where each of r, g, b is greater than 0x7f:
color = (0..2).map { "%02x" % (rand(0x80) + 0x80) }.join
I modified one of the answers from the linked question (Daniel Spiewak's answer) to come up with something that is pretty flexible in terms of excluding darker colors:
floor = 0x22 # meaning darkest possible color is #222222
r = (rand(256 - floor) + floor).to_s(16)
g = (rand(256 - floor) + floor).to_s(16)
b = (rand(256 - floor) + floor).to_s(16)
[r, g, b].map { |h| h.rjust(2, '0') }.join
You can change the floor value to suit your needs. A higher value will limit the output to lighter colors, and a lower value will allow darker colors.
A really nice solution is provided by the color-generator gem, where you can call:
ColorGenerator.new(saturation: 0.75, lightness: 0.5).create_hex
