not getting GLCM matrix dimensions for 8-bit grayscale image

As theory states, a GLCM is said to have dimensions of 2^x by 2^x, where x is the bit depth of the image. My problem is that I get an 8-by-8 matrix instead of a 2^8-by-2^8 matrix when I run it on an 8-bit grayscale image.
Could someone please help me out?

According to the MATLAB documentation:
graycomatrix calculates the GLCM from a scaled version of the image.
By default, if I is a binary image, graycomatrix scales the image to
two gray-levels. If I is an intensity image, graycomatrix scales the
image to eight gray-levels. You can specify the number of gray-levels
graycomatrix uses to scale the image by using the 'NumLevels'
parameter, and the way that graycomatrix scales the values using the
'GrayLimits' parameter — see Parameters.
In short, you need to run the function as follows:
glcm = graycomatrix(I, 'NumLevels', 2^8);
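As a quick sanity check (coins.png here is just an assumed 8-bit grayscale sample image):
I = imread('coins.png');                  % any 8-bit grayscale image
glcm = graycomatrix(I, 'NumLevels', 2^8); % 256 gray levels instead of the default 8
size(glcm)                                % returns [256 256], i.e. 2^8-by-2^8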

Related

Matlab ROI Image Processing Approach

I currently have images of the following nature:
The goal is to have the code display the mean value of each of the squares. The position of each square slightly shifts from image to image. The images are stored as 1024 x 1024 matrices (type = double). Any suggestions on what approach to take in this case?
Thank you for your time!

Convert RGB to color scale defined by any two colors [duplicate]

I am doing some image processing and I needed to reduce the number of colors of an image. I found that rgb2ind could do that and wrote the following snippet:
clc
clear all
[X, map] = rgb2ind(RGB, 6, 'nodither');
X = rgb2ind(RGB, map);
rgb = ind2rgb(X, map);
rgb_uint8 = uint8(rgb*255 + 0.5);
imshow(rgb_uint8);
But the output looks like this and I doubt there are only 6 colors in it.
It may perceptually look like there are more than 6 colours, but there are truly only 6. If you take a look at your map variable, it will be a 6 x 3 matrix. Each row contains a colour that your image is quantized to.
To double check, convert this image into a grayscale image, then compute a histogram of it. If rgb2ind worked, you should only see 6 spikes in the histogram.
BTW, to reproduce your problem, I used the peppers.png image that ships with MATLAB (which appears to be the image you used). As such, this is what I did to demonstrate what I'm talking about:
RGB = imread('peppers.png');
%// Your code
[X, map] = rgb2ind(RGB, 6, 'nodither');
X = rgb2ind(RGB, map);
rgb = ind2rgb(X, map);
rgb_uint8 = uint8(rgb*255 + 0.5);
imshow(rgb_uint8);
%// My code - Double check colour distribution
figure;
imhist(rgb2gray(rgb_uint8));
axis tight;
This is the figure I get:
As you can see, there are 6 spikes in our histogram. If there were truly 6 unique colours after you ran your code, then there should be 6 corresponding grayscale intensities when you convert the image to grayscale, and the histogram above verifies this.
As such, you are quantizing your image to 6 colours; it just doesn't look like it because of the quantization noise in your image.
Don't doubt your result; the image contains exactly 6 colours.
As explained in the MATLAB documentation, the rgb2ind function returns an indexed matrix (X in your code) and a colormap (map in your code). So if you want to check the number of colours in X, you can simply check the size of the colormap: size(map).
In your case the size will be 6x3: 6 colours described on 3 channels (red, green and blue).
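A minimal check along those lines, reusing the peppers.png example from the first answer:
RGB = imread('peppers.png');
[X, map] = rgb2ind(RGB, 6, 'nodither');
size(map)               % [6 3]: 6 colours, one row per colour
numel(unique(X(:)))     % at most 6 distinct indices in the indexed image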

what is the difference between image and imagesc in matlab

I want to know the difference between imagesc and image in MATLAB.
I used this example to try to figure out the difference between the two, but I couldn't explain the difference in the output images by myself; could you help me with that?
I = rand(256,256);
for i = 1:256
    for j = 1:256
        I(i,j) = j;
    end
end
figure('Name','Comparison between image and imagesc');
subplot(2,1,1); image(I); title('using image(I)');
subplot(2,1,2); imagesc(I); title('using imagesc(I)');
figure('Name','gray level of image');
image(I); colormap('gray');
figure('Name','gray level of imagesc');
imagesc(I); colormap('gray');
image displays the input array as an image. When that input is a matrix, by default image has the CDataMapping property set to 'direct'. This means that each value of the input is interpreted directly as an index to a color in the colormap, and out of range values are clipped:
image(C) [...] When C is a 2-dimensional MxN matrix, the elements of C are used as indices into the current colormap to determine the color. The
value of the image object's CDataMapping property determines the
method used to select a colormap entry. For 'direct' CDataMapping (the default), values in C are treated as colormap indices (1-based if double, 0-based if uint8 or uint16).
Since Matlab colormaps have 64 colors by default, in your case this has the effect that values above 64 are clipped. This is what you see in your image graphs.
Specifically, in the first figure the colormap is the default parula with 64 colors; and in the second figure colormap('gray') applies a gray colormap of 64 gray levels. If you try for example colormap(gray(256)) in this figure the image range will match the number of colors, and you'll get the same result as with imagesc.
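In code, that fix looks like this (a small sketch reusing the ramp I from the question):
figure('Name','gray level of image with a 256-entry colormap');
image(I);               % values 1..256 are used directly as colormap indices
colormap(gray(256));    % 256 gray levels, so nothing is clipped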
imagesc is like image but applying automatic scaling, so that the image range spans the full colormap:
imagesc(...) is the same as image(...) except the data is scaled to use the full colormap.
Specifically, imagesc corresponds to image with the CDataMapping property set to 'scaled':
image(C) [...] For 'scaled' CDataMapping, values in C are first scaled according to the axes CLim and then the result is treated as a colormap index.
That's why you don't see any clipping with imagesc.
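To see the correspondence directly, the following sketch (again with the ramp I from the question) should produce two essentially identical plots:
figure;
subplot(2,1,1); image(I, 'CDataMapping', 'scaled'); title('image with scaled CDataMapping');
subplot(2,1,2); imagesc(I); title('imagesc');
colormap(gray);          % the figure colormap applies to both subplots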

entropyfilt in OpenCV

I am working on an image processing project and I have to use entropyfilt (from MATLAB).
I researched and found some information on how to do it, but not enough. I can calculate the entropy value of an image, but I don't know how to write an entropy filter. There is a similar question on the site, but I didn't understand it either.
Can anybody help me understand the entropy filter?
From the MATLAB documentation:
J = entropyfilt(I) returns the array J, where each output pixel contains the entropy value of the 9-by-9 neighborhood around the corresponding pixel in the input image I. I can have any dimension. If I has more than two dimensions, entropyfilt treats it as a multidimensional grayscale image and not as a truecolor (RGB) image. The output image J is the same size as the input image I.
For each pixel, you look at the 9-by-9 area around it and calculate the entropy. Since entropy is a nonlinear measure, it is not something you can compute with a simple convolution kernel. You have to loop over each pixel and do the calculation on a per-pixel basis.
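A minimal MATLAB sketch of that per-pixel computation (an OpenCV port would loop the same way; coins.png is just an assumed sample image, and note that entropyfilt pads the borders symmetrically while this naive version simply skips a 4-pixel border):
I = imread('coins.png');               % any 8-bit grayscale image
[rows, cols] = size(I);
J = zeros(rows, cols);
for r = 5 : rows-4
    for c = 5 : cols-4
        block = I(r-4:r+4, c-4:c+4);   % 9-by-9 neighbourhood
        p = imhist(block);             % 256-bin histogram of the block
        p = p(p > 0) / numel(block);   % normalize and drop empty bins
        J(r,c) = -sum(p .* log2(p));   % Shannon entropy of the neighbourhood
    end
end
imshow(J, []);                         % should closely resemble entropyfilt(I)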

RGB image to binary image

I want to load an RGB image in MATLAB and turn it into a binary image, where I can choose how many pixels the binary image has. For instance, I'd load a 300x300 png/jpg image into MATLAB and I'll end up with a binary image (pixels can only be #000 or #FFF) that could be 10x10 pixels.
This is what I've tried so far:
load trees % from MATLAB
gray=rgb2gray(map); % 'map' is loaded from 'trees'. Convert to grayscale.
threshold=128;
lbw=double(gray>threshold);
BW=im2bw(X,lbw); % 'X' is loaded from 'trees'.
imshow(X,map), figure, imshow(BW)
(I got some of the above from an internet search.)
I just end up with a black image when doing the imshow(BW).
Your first problem is that you are confusing indexed images (which have a colormap map) and RGB images (which don't). The sample built-in image trees.mat that you load in your example is an indexed image, and you should therefore use the function ind2gray to first convert it to a grayscale intensity image. For RGB images the function rgb2gray would do the same.
Next, you need to determine a threshold to use to convert the grayscale image to a binary image. I suggest the function graythresh, which will compute a threshold to plug into im2bw (or the newer imbinarize). Here is how I would accomplish what you are doing in your example:
load trees; % Load the image data
I = ind2gray(X, map); % Convert indexed to grayscale
level = graythresh(I); % Compute an appropriate threshold
BW = im2bw(I, level); % Convert grayscale to binary
And here is what the original image and result BW look like:
For an RGB image input, just replace ind2gray with rgb2gray in the above code.
With regard to resizing your image, that can be done easily with the Image Processing Toolbox function imresize, like so:
smallBW = imresize(BW, [10 10]); % Resize the image to 10-by-10 pixels
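Putting both pieces together for a true RGB input (a sketch; peppers.png is just an assumed sample image):
RGB = imread('peppers.png');        % any RGB image
I = rgb2gray(RGB);                  % RGB -> grayscale intensity
BW = im2bw(I, graythresh(I));       % binarize with an automatically chosen threshold
smallBW = imresize(BW, [10 10]);    % shrink the binary image to 10-by-10 pixels
imshow(BW);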
It is because gray is in the range [0,1], whereas your threshold of 128 assumes a [0,256] scale.
This causes lbw to be a big array of false values. Here is a modified version of the code that solves the problem:
load trees % from MATLAB
gray=rgb2gray(map); % 'map' is loaded from 'trees'. Convert to grayscale.
threshold=128/256;
lbw=double(gray>threshold);
BW=im2bw(X,lbw); % 'X' is loaded from 'trees'.
imshow(X,map), figure, imshow(BW)
And the result is:
