MATLAB stores the image as a 3-dimensional array. The first two dimensions correspond to the row and column coordinates shown on the axes of the picture above. Each pixel is represented by three entries along the third dimension, and the three layers hold the red, green, and blue intensities of the pixel array. We can extract the individual red, green, and blue components of the image like this:
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
For example, the original image is:
If you display the red, green, and blue channels individually, you get these grayscale images:
If you concatenate one of these channels with two black matrices (zero matrices), you get a colored image. Let us concatenate each channel with black image matrices standing in for the remaining channels:
[rows, columns, numberOfColorChannels] = size(rgbImage);
blackImage = uint8(zeros(rows, columns));
newRedChannel = cat(3, redChannel, blackImage, blackImage);
newGreenChannel = cat(3, blackImage, greenChannel, blackImage);
newBlueChannel = cat(3, blackImage, blackImage, blueChannel);
It outputs the following images:
Why does it work this way? Why do the individual channels for each color have to be concatenated with zero matrices (black images) for the remaining channels in order for the result to be colored when displayed? And why are the individual color channels actually just grayscale images when displayed on their own?
In MATLAB, greyscale images are represented as 2D matrices and colour images as 3D matrices. A red image is still a colour image, hence it still has to be a 3D array. Since it is a purely red image and thus has no other colours, the green and blue channels should be empty, hence matrices of zeros.
Note also that when we say greyscale, we are really referring to an indexed image. So you can make this a colour image by applying a colourmap. If you were to apply a colourmap that ranges from black to red, then your 2D matrix would display as your red image above. The question is, how would MATLAB know what colourmap to apply? Yes, this does take up less space than a colour image. But it increases the complexity of your code.
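For instance, a minimal sketch of what applying such a colourmap could look like (assuming redChannel is the uint8 matrix extracted above; the colourmap itself is made up here):
% A 256-entry black-to-red colourmap: red ramps from 0 to 1,
% green and blue stay at 0.
blackToRed = [linspace(0, 1, 256)', zeros(256, 2)];
% Treat the 2D red channel as an indexed image and display it with
% that colourmap; it then appears red rather than grey.
figure;
imshow(redChannel, blackToRed);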
On the other hand, ask yourself what you would expect an image to look like if you set two of the colour channels to zero. The only logical answer is exactly the separate colour-channel images you have created above.
If you like, try to rephrase your question as "how else could MATLAB have implemented this?". In other words, if your red channel were a 2D image, how would MATLAB know it is a red channel and not a green channel or a greyscale image? Challenge yourself to think of a more practical way to represent these data. I think this exercise will convince you of the merits of MATLAB's design choice in this matter.
Also it is worth noting that many file formats, e.g. .bmp, work the same way.
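As a quick sanity check (just a sketch, reusing the variables from the snippets above), stacking the three original channels back together reproduces the original colour image:
% Stack the three 2D channels along the third dimension to rebuild the 3D RGB image.
reconstructed = cat(3, redChannel, greenChannel, blueChannel);
isequal(reconstructed, rgbImage)   % returns logical 1 (true)
figure;
imshow(reconstructed);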
Related
I have a segmented image in which my region of interest (ROI) was white cotton. Now I want to compare the number of pixels in the segmented area, i.e. the total number of pixels in the white blob in the binary image, with the actual number of pixels of the ROI in the original image. How can I do that? The following figure should clarify the point.
As we can see from the original image, my ROI was the white cotton circled by the red boundary. When I segmented this image I got the binary image shown. As can be noticed, some areas are missing in the binary image compared to the original area. So I want to count the number of pixels of the ROI in the original image and the number of pixels of the white blob in the binary image, so that I can calculate the difference between the actual ROI pixels and the segmented pixel count.
Thank You.
If you do not wish to draw the boundaries yourself, you can try this. It might not be as precise as you need, but you might get close to the actual value by tweaking the threshold values I used (100 for all 3 channels in this case).
Assume I is your original image. First create the binary mask by thresholding with the RGB values. Then remove all the small objects that don't have at least a 2000 pixel area. Then sum up the pixels of that object.
IT = I(:,:,1) > 100;           % keep pixels whose red channel is above 100
IT(I(:,:,2) < 100) = 0;        % drop pixels whose green channel is below 100
IT(I(:,:,3) < 100) = 0;        % drop pixels whose blue channel is below 100
IT = bwareaopen(IT, 2000);     % remove connected components smaller than 2000 pixels
sum(IT(:) > 0)                 % count the remaining (white) pixels
ans = 21380
Resulting image:
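To get the other side of the comparison, i.e. the "actual" pixel count of the ROI, one option (a sketch, not part of the answer above) is to outline the cotton by hand with roipoly on the original image and compare the two counts:
% Draw the true ROI boundary by hand on the original image
% (click around the cotton, double-click to close the polygon).
roiMask = roipoly(I);
nActual    = sum(roiMask(:));    % pixels inside the hand-drawn ROI
nSegmented = sum(IT(:));         % pixels in the segmented white blob
fprintf('ROI: %d px, segmented: %d px, difference: %d px\n', ...
    nActual, nSegmented, nActual - nSegmented);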
I have a simple green fluorescent image. I want to find the total number of pixels that are above a specific value using MATLAB. I don't know where the pixel values are stored in an image.
Here is the green fluorescence image. I want to know what percentage of the pixels have a value above a specific threshold. For example, in this image, if the pixel values in the cells are around X, then I want to find the total number of pixels that are above X.
If you read a color image using imread, you get a 3D matrix in which the first two indices are the image coordinates (row, column) and the last index represents the color channel. For the typical case of an RGB image, the color channels are:
1 = red
2 = green
3 = blue.
Other possibilities are grayscale, CMYK and indexed images. Please check the official documentation for more information.
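Putting that together for the fluorescence question, a minimal sketch could look like this (the file name and the threshold X are placeholders):
I = imread('fluorescence.png');   % placeholder file name
green = I(:, :, 2);               % channel 2 = green
X = 100;                          % placeholder threshold
nAbove = sum(green(:) > X);       % number of pixels brighter than X
pctAbove = 100 * nAbove / numel(green);
fprintf('%d pixels (%.2f%%) are above %d\n', nAbove, pctAbove, X);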
I have segmentation masks with indexed colors. Unfortunately there is (colored) noise at the edges of objects. At the transition from one color region to the next, there are small pixel regions in different colors, separating the two color regions (caused by converting transparent pixels at the edges).
I want to remove this noise (with MATLAB) by assigning a color of one of the neighboring large regions. It doesn't matter, which one - main thing is to remove the small areas.
It can be assumed that small regions of ANY color may be removed in this way (reassign to neighboring large region).
In the case of a binary image, I could use bwareaopen (suggested in this Q&A: Remove small chunks of labels in an image). Converting the image to a binary image for each color might be a workaround; however, this is costly (for many colors) and leaves the question of reassignment open. I hope there are more elegant ways to do this.
Check the following:
1. Convert the RGB image to an indexed image.
2. Apply a median filter on the indexed image.
3. Convert back to RGB.
RGB = imread('GylzKm.png');
%Convert RGB to indexed image with 4 levels
[X, map] = rgb2ind(RGB, 4);
%Apply median filter on 4 levels images
X = medfilt2(X, [5, 5]);
%Convert indexed image back to RGB.
J = ind2rgb(X, map);
figure;imshow(J);
The black border is a little problematic.
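One guess (not verified on this particular image): the dark border may come from medfilt2 padding the image with zeros by default; asking it to pad symmetrically instead might reduce the artifact:
% 'symmetric' mirrors the image at its borders instead of padding with
% zeros, which can otherwise darken the edges of the filtered result.
X = medfilt2(X, [5, 5], 'symmetric');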
I have spent more than a week reading about selective color change in an image. It means selecting a color from a color picker, then selecting a part of the image in which I want to change the color, and applying the change from the original color to the color chosen in the color picker.
E.g. if I select a blue color in the color picker and I also select a red part of the image, I should be able to change red to blue across the whole image.
Another example: if I have an image with red apples and oranges, and I select an apple in the image and a blue color in the color picker, then all apples should change color from red to blue.
I have some ideas, but of course I need something more concrete on how to do this.
Thank you for reading
As a starting point, consider clustering the colors of your image. If you don't know how many clusters you want, you will need a method to decide whether or not to merge two given clusters. For the moment, let us suppose that we know that number. For example, given the following image at left, I mapped its colors to 3 clusters, which have the mean colors shown in the middle, and representing each cluster by its mean color gives the figure at right.
With the output at right, what you now need is a method to replace colors. Suppose the user clicks a single point somewhere in your image; then you know the positions in the original image that you will need to modify. For the next image, the user (me) clicked on a point contained in the "orange" cluster, and then clicked on some blue hue. From that, you build a mask representing the points in the "orange" cluster and play with it. I used a simple Gaussian filter followed by a flat 3x5 dilation. Then you replace the hues in the original image according to the produced mask (after the low-pass filtering, the values in the mask are also used as alpha values for compositing the images).
Not perfect at all, but you could have a better clustering than me and also a much-less-primitive color replacement method. I intentionally skipped the details about clustering method, color space, and others, because I used only basic k-means on RGB without any pre-processing of the input. So you can consider the results above as a baseline for anything else you can do.
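For reference, a rough sketch of the clustering step described above (plain k-means on the RGB values with the number of clusters fixed at 3; the file name is a placeholder and kmeans needs the Statistics and Machine Learning Toolbox):
I = im2double(imread('fruit.png'));          % placeholder file name
[rows, cols, ~] = size(I);
% Flatten to an N-by-3 list of RGB values and run basic k-means.
pixels = reshape(I, rows * cols, 3);
k = 3;                                       % number of clusters, chosen by hand
[labels, centers] = kmeans(pixels, k);
% Replace every pixel by the mean colour of its cluster.
meanColorImage = reshape(centers(labels, :), rows, cols, 3);
figure; imshow(meanColorImage);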
Given the image, a selected color, and a target new color - you can't do much that isn't ugly. You also need a range, some amount of variation in color, so you can say one pixel's color is "close enough" while another is clearly "different".
First step of processing: create a mask image, which is grayscale, varies from 0.0 to 1.0 (or from zero to some maximum value we'll treat as 1.0), and has the same size as the input image. For each input pixel, test whether its color is sufficiently near the selected color. If it is "the same" or "close enough", put 1.0 in the mask. If it is different, put 0.0. If it is somewhere in between, put an in-between value. Exactly how to do this depends on the details of the image.
This might work best in LAB space, testing for sameness according to the angle of the (A, B) coordinates relative to the origin.
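A sketch of that masking idea in LAB space (I, selectedColor and the 20-degree tolerance are assumptions, not part of the answer):
% I: original RGB image, selectedColor: 1x3 RGB triplet with values in [0, 1].
labImage = rgb2lab(im2double(I));
labSel   = rgb2lab(reshape(selectedColor, 1, 1, 3));
% Hue-like angle of the (A, B) coordinates relative to the origin.
angImage = atan2(labImage(:, :, 3), labImage(:, :, 2));
angSel   = atan2(labSel(3), labSel(2));
% Angular distance, wrapped into [0, pi].
d = abs(angle(exp(1i * (angImage - angSel))));
% Soft mask: 1.0 for a perfect match, fading to 0.0 beyond ~20 degrees.
tol  = deg2rad(20);
mask = max(0, 1 - d / tol);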
Once you have the mask, put it aside. Now color-transform the whole image. This might be best done in HSV space. Don't touch the V channel. Add a constant to H, modulo 360 degrees (or mod 256, if H is stored as bytes), and multiply S by a constant, chosen so that the HSV coordinates of the selected color are moved to the HSV coordinates of the target color. Convert the transformed H and S, together with the unchanged V, back to RGB.
Finally, use the mask to blend the original image with the color-transformed one. Apply this to each channel - red, green, blue:
output = (1-mask)*original + mask*transformed
If you're doing it all in byte arrays, 0 is 0.0 and 255 is 1.0, and be careful of overflow and signed/unsigned problems.
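That last blending step, as a sketch in double precision (original, transformed and mask are assumed to come from the steps above, with mask replicated across the three channels):
orig  = im2double(original);       % original RGB image
trans = im2double(transformed);    % colour-transformed RGB image
% Replicate the single-channel mask so it applies to R, G and B alike.
m3 = repmat(mask, [1, 1, 3]);
% Per-channel alpha blend; working in doubles avoids the overflow and
% signedness issues mentioned above.
output = (1 - m3) .* orig + m3 .* trans;
imshow(output);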
In MATLAB,
if I do:
output = false(5, 5);
imshow(output);
it will show me a black square instead of a white binary square image. Is there a reason for this? How can I output a white binary square?
The reason is that false is mapped to 0, and true is mapped to 1.
Also, when showing images, a higher number is displayed with higher intensity, and white has more intensity than black.
Another way to think about it is that you usually have 256 values, 0-255: 0 is totally black and 255 is totally white. Now imagine quantizing down to two colors; it then becomes obvious that 0 should be black.
In order to show a white square, use
output = true(5,5)
You could use imcomplement
imshow(imcomplement(false(5, 5)))
or modify the default color mapping (quoting from imshow's documentation)
imshow(X,map)
displays the indexed image X with the colormap map. A color map matrix may have any number of rows, but it must have exactly 3 columns. Each row is interpreted as a color, with the first element specifying the intensity of red light, the second green, and the third blue. Color intensity can be specified on the interval 0.0 to 1.0.
You could also change the figure's colormap to customize how MATLAB maps values to colors:
BW = [false,true;true,false];
imshow(BW)
set(gcf, 'Colormap',[1,1,1;0,0,0])