What is the difference between image and imagesc in MATLAB?

I want to know the difference between imagesc and image in MATLAB.
I used this example to try to figure out the difference between the two, but I couldn't explain the difference in the output images by myself; could you help me with that?
I = rand(256,256);
for i = 1:256
    for j = 1:256
        I(i,j) = j;
    end
end
figure('Name','Comparison between image and imagesc')
subplot(2,1,1); image(I);   title('using image(I)');
subplot(2,1,2); imagesc(I); title('using imagesc(I)');
figure('Name','gray level of image');
image(I); colormap('gray');
figure('Name','gray level of imagesc');
imagesc(I); colormap('gray');

image displays the input array as an image. When that input is a matrix, by default image has the CDataMapping property set to 'direct'. This means that each value of the input is interpreted directly as an index to a color in the colormap, and out of range values are clipped:
image(C) [...] When C is a two-dimensional M-by-N matrix, the elements of C are used as indices into the current colormap to determine the color. The value of the image object's CDataMapping property determines the method used to select a colormap entry. For 'direct' CDataMapping (the default), values in C are treated as colormap indices (1-based if double, 0-based if uint8 or uint16).
Since Matlab colormaps have 64 colors by default, in your case this has the effect that values above 64 are clipped. This is what you see in your image graphs.
Specifically, in the first figure the colormap is the default parula with 64 colors; and in the second figure colormap('gray') applies a gray colormap of 64 gray levels. If you try for example colormap(gray(256)) in this figure the image range will match the number of colors, and you'll get the same result as with imagesc.
imagesc is like image but applies automatic scaling, so that the data range spans the full colormap:
imagesc(...) is the same as image(...) except the data is scaled to use the full colormap.
Specifically, imagesc corresponds to image with the CDataMapping property set to 'scaled':
image(C) [...] For 'scaled' CDataMapping, values in C are first scaled according to the axes CLim and then the result is treated as a colormap index.
That's why you don't see any clipping with imagesc.
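To see this directly, here is a minimal sketch (using the matrix I from the question, whose values run from 1 to 256); it is only an illustration of the two points above:

% imagesc(I) behaves like image(I) with CDataMapping set to 'scaled'
figure; image(I, 'CDataMapping', 'scaled'); title('image with scaled CDataMapping');
figure; imagesc(I); title('imagesc(I)');
% Alternatively, keep the 'direct' mapping of image(I) but give the colormap
% as many entries as the data range, so no values are clipped
figure; image(I); colormap(gray(256)); title('image(I) with gray(256)');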

Related

If we shift the hue by 2*pi/3, how will the R, G, B histograms change?

How can I test this? I have access to Photoshop, so is there a way to test this and find the answer?
According to the HSV to RGB conversion formula (part of it):
Shifting the hue by 120° will swap the channel histograms:
+120° : R --> G --> B --> R
-120° : B <-- R <-- G <-- B
To test this in GIMP, open the image histogram in Colors \ Info \ Histogram.
Choose the Red, Green or Blue channel to see its histogram, then open the
Colors \ Hue-Saturation dialog, adjust Hue by +/- 120 degrees, and watch the live effect in the Histogram window.
I do not think there is a generic answer to this, as the result depends on the actual image colors present, not just on the R, G, B histograms. You need to (see the MATLAB sketch after this list):
1. compute histograms
2. convert RGB to HSV
3. add to the hue and clamp it to the angular interval
4. convert back to RGB
5. compute histograms again
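As a rough MATLAB sketch of these five steps (the file name is a placeholder; rgb2hsv/hsv2rgb represent hue in [0,1], so +120° is +1/3, handled here as a periodic value with mod):

rgb = im2double(imread('input.png'));   % 1. read RGB image (placeholder name)
% (compute the original histograms with imhist(rgb(:,:,c)) for c = 1:3)
hsv = rgb2hsv(rgb);                     % 2. convert RGB -> HSV (hue in [0,1])
hsv(:,:,1) = mod(hsv(:,:,1) + 1/3, 1);  % 3. shift hue by +120 deg, wrap around
rgb2 = hsv2rgb(hsv);                    % 4. convert back to RGB
figure;                                 % 5. compare per-channel histograms
for c = 1:3
    subplot(2,3,c);   imhist(rgb(:,:,c));  title(sprintf('original channel %d', c));
    subplot(2,3,c+3); imhist(rgb2(:,:,c)); title(sprintf('shifted channel %d', c));
end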
I do not use Photoshop, but I think #1, #2, #4, #5 should be available there. #3 should be there too (in some filter that manipulates brightness, gamma, etc.), but it is hard to say whether adding to the hue will be clamped by simply limiting the angle or handled as a periodic value. In the first case you need to correct the results by:
1. compute histograms
2. convert to HSV
3. clone the result A to a second image B
4. add A.hue += pi2/3 and B.hue -= 2*pi2/3
   Now A holds the un-clamped colors and B the colors that were clamped in A, shifted to the correct hue position.
5. in A, recolor all pixels with hue == pi2 with some specified color
   Here pi2 should be the value above which your tool clamps hues, so it can be zero, pi2, or one step less than pi2. This will allow us to ignore the clamped values later.
6. in B, recolor all pixels with hue == 0 with some specified color
7. convert A and B back to RGB
8. compute histograms, ignoring the specified color
9. merge the A and B histograms
   Simply add the graph values together.
And now you can compare the histograms to evaluate the change on some sample images.
Anyway, you can do all this in any programming language. For example, most of the operations needed are present in most image processing and computer vision libraries such as OpenCV, and adding to the hue is just two nested for loops with an addition and a single if statement, like:
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
    {
    pixel[y][x].h+=pi2/3.0;     // shift hue by 2*pi/3 (pi2 = 2*pi)
    if (pixel[y][x].h>=pi2)     // wrap around the full circle
        pixel[y][x].h-=pi2;
    }
Of course, most HSV pixel formats I have used do not use floating-point values, so the hue could be represented for example by an 8-bit unsigned integer, in which case the code would look like:
for (y=0;y<ys;y++)
 for (x=0;x<xs;x++)
    pixel[y][x].h=(pixel[y][x].h+(256/3))&255;   // +120 deg with wrap-around
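A rough MATLAB equivalent of this integer version, assuming H is a uint8 hue channel where 0..255 spans the full hue circle (H here is just placeholder data):

H = uint8(randi([0 255], 256, 256));                  % placeholder uint8 hue channel
Hshift = uint8(mod(double(H) + round(256/3), 256));   % +120 deg with wrap-around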
If you need to implement the RGB/HSV conversions look here:
RGB value base color name
I think this might interest you:
HSV histogram
Looking at it from a mathematical point of view: with pi ≈ 3.14, 2×pi is the full angle of a circle (360°).
Divided by 3, that means 2×pi/3 is a third of a circle, or simply 120°.

How to add a new colormap on a raster dataset?

I am trying to replicate some ArcGIS functionality in Matlab, specifically the Add Colormap function. The Add Colormap function in ArcGIS associates a .clr file with the TIFF image so that the image has a custom color scheme associated with the TIFF when viewed.
My TIFF images have up to 6 values (1 - 6) in unsigned 8-bit integer format. You can see from the screenshot that some of the images have only 1, 2, or 3 values, while others have 6 values, resulting in variable on-screen color rendering.
I see that Matlab has colormap functionality; however, it appears to be designed only for figures rather than for TIFF files. How can I associate a colormap with these TIFF images in Matlab so that when I view them (e.g. in ArcGIS), they have a custom color scheme?
As some of the commenters have pointed out, the colormap functionality isn't actually limited to just figures. The colormap concept is really just a lookup table that maps a particular value (index) to a specific color (in RGB, typically).
If you check out the documentation for imwrite, you will see that you can actually specify a colormap as the second input to the function.
load mri
im = squeeze(D(:,:,12));   % This is an indexed image (M x N), of type uint8

% Save without specifying a colormap
imwrite(im, 'nocolormap.tif')

% Now save with a colormap (a 256-entry map covers the full uint8 index range)
imwrite(im, hot(256), 'colormap.tif')
The other alternative is to create an RGB image within MATLAB and save this image without providing a colormap to imwrite. You can either create this image manually
% Normalize a little bit for display
im = double(im) ./ max(im(:));
output = repmat(im, [1 1 3]); % Make the image (M x N x 3)
imwrite(output, 'rgb_grayscale.tif')
Or you can use the built-in function ind2rgb to convert an indexed image to an RGB image using a specific colormap.
rgb_image = ind2rgb(im, jet(256));
imwrite(rgb_image, 'rgb_jet.tif')
One thing that is pretty important to remember in all of this is that by default, any MATLAB colormap only has 64 colors. So if you need more colors than that, you can specify it when constructing the colormap
size(gray)        % ans = [64 3]
size(gray(1000))  % ans = [1000 3]
This is particularly important if you're trying to display high fidelity data.
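For the 6-valued TIFFs in the question, a minimal sketch might look like the following (the file names and the six RGB rows are placeholders, since the actual .clr colors aren't given; whether ArcGIS picks up the embedded palette is worth verifying on your data):

% Six colormap rows, one per pixel value 1..6 (placeholder colors)
cmap = [0 0 0;      % value 1
        1 0 0;      % value 2
        0 1 0;      % value 3
        0 0 1;      % value 4
        1 1 0;      % value 5
        1 0 1];     % value 6
im = imread('classified.tif');                % uint8 image with values 1..6
% uint8 data is treated as 0-based colormap indices, so shift values down by 1
imwrite(im - 1, cmap, 'classified_colored.tif')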

To imread Parula image in Matlab without losing resolution

There is no bijection between RGB and Parula, as discussed here.
I am trying to work out how to properly process image files that use the Parula colormap.
This challenge developed from this thread about removing the black color from ECG images, extending that case to a generalized problem with Parula colors.
Data: an image which is generated by
[X,Y,Z] = peaks(25);
imgParula = surf(X,Y,Z);
view(2);
axis off;
The point of this thread is not to use this code in your solution to read the second image.
Code:
[imgParula, map, alpha] = imread('http://i.stack.imgur.com/tVMO2.png');
where map is [] and alpha is a completely white image. Doing imshow(imgParula) gives an image in which you see a lot of interference and loss of resolution, because Matlab reads the image as RGB although the actual colormap is Parula.
Resizing this picture does not improve resolution.
How can you read image into corresponding colormap in Matlab?
I did not find any parameter to specify the colormap in reading.
The Problem
There is a one-to-one mapping from indexed colors in the parula colormap to RGB triplets. However, no such one-to-one mapping exists to reverse this process and convert an RGB triplet back to a parula index (indeed, there are an infinite number of ways to do so). Thus, there is no one-to-one correspondence or bijection between the two spaces. The plot below, which shows the R, G, and B values for each parula index, makes this clearer.
This is the case for most indexed color schemes. Any solution to this problem will be non-unique.
A Built-in Solution
After playing around with this a bit, I realized that there's already a built-in function that may be sufficient: rgb2ind, which converts RGB image data to indexed image data. This function uses dither (which in turn calls the mex function ditherc) to perform the inverse colormap transformation.
Here's a demonstration that uses JPEG compression to add noise and distort the colors in the original parula index data:
img0 = peaks(32); % Generate sample data
img0 = img0-min(img0(:));
img0 = floor(255*img0./max(img0(:))); % Convert to 0-255
fname = [tempname '.jpg']; % Save file in temp directory
map = parula(256); % Parula colormap
imwrite(img0,map,fname,'Quality',50); % Write data to compressed JPEG
img1 = imread(fname); % Read RGB JPEG file data
img2 = rgb2ind(img1,map,'nodither'); % Convert RGB data to parula colormap
figure;
image(img0); % Original indexed data
colormap(map);
axis image;
figure;
image(img1); % RGB JPEG file data
axis image;
figure;
image(img2); % rgb2ind indexed image data
colormap(map);
axis image;
This should produce images similar to the first three below.
Alternative Solution: Color Difference
Another way to accomplish this task is by comparing the difference between the colors in the RGB image with the RGB values that correspond to each colormap index. The standard way to do this is by calculating ΔE in the CIE L*a*b* color space. I've implemented a form of this in a general function called rgb2map that can be downloaded from my GitHub. This code relies on makecform and applycform in the Image Processing Toolbox to convert from RGB to the 1976 CIE L*a*b* color space.
The following code will produce an image like the one on the right above:
img3 = rgb2map(img1,map);
figure;
image(img3); % rgb2map indexed image data
colormap(map);
axis image;
For each RGB pixel in an input image, rgb2map calculates the color difference between it and every RGB triplet in the input colormap using the CIE 1976 standard. The min function is used to find the index of the minimum ΔE (if more than one minimum value exists, the index of the first is returned). More sophisticated means can be used to select the "best" color in the case of multiple ΔE minima, but they will be more costly.
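To make that idea concrete, here is a minimal sketch of the nearest-color search (this is not the actual rgb2map code; it assumes rgb2lab from a newer Image Processing Toolbox release, plain Euclidean distance in L*a*b*, i.e. the CIE 1976 deltaE, and implicit expansion from R2016b or later):

map    = parula(256);                  % target colormap
rgb    = im2double(img1);              % M x N x 3 RGB data from above
labImg = rgb2lab(rgb);                 % image pixels in CIE L*a*b*
labMap = rgb2lab(map);                 % colormap entries in CIE L*a*b*
pix = reshape(labImg, [], 3);          % one L*a*b* triplet per row
% Squared CIE 1976 distances between every pixel and every colormap entry
d = sum(pix.^2, 2) + sum(labMap.^2, 2).' - 2*(pix*labMap.');
[~, idx] = min(d, [], 2);              % closest colormap entry per pixel
imgNearest = reshape(idx, size(rgb,1), size(rgb,2));  % indexed result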
Conclusions
As a final example, I used an image of the namesake Parula bird to compare the two methods in the figure below. The two results are quite different for this image. If you manually adjust rgb2map to use the more complex CIE 1994 color difference standard, you'll get yet another rendering. However, for images that more closely match the original parula colormap (as above) both should return more similar results. Importantly, rgb2ind benefits from calling mex functions and is almost 100 times faster than rgb2map despite several optimizations in my code (if the CIE 1994 standard is used, it's about 700 times faster).
Lastly, those who want to learn more about colormaps in Matlab should read this four-part MathWorks blog post by Steve Eddins on the new parula colormap.
Update 6-20-2015: The rgb2map code described above was updated to use different color space transforms, which improves its speed by almost a factor of two.

Histogram matching of two Images without using histeq

It is well known that histeq in MATLAB can perform histogram matching so that an image's histogram is transformed to look like another histogram. I am trying to perform this same operation without using histeq. I'm aware that you need to calculate the CDFs of the two images, but I'm not sure what to do next. What do I do?
Histogram matching is concerned with transforming one image's histogram so that it looks like another. The basic principle is to compute the histogram of each image individually, then compute their discrete cumulative distribution functions (CDFs). Let's denote the CDF of the first image as F1(x) and the CDF of the second image as F2(x). Therefore, F1(x) would denote what the CDF value is for intensity x for the first image.
Once you calculate the CDFs for each of the images, you need to compute a mapping that transforms one intensity from the first image so that it is in agreement with the intensity distribution of the second image. To do this, for each intensity in the first image - let's call this x1, which will be from [0,255] assuming an 8-bit image - we must find an intensity x2 in the second image (also in the range of [0,255]) such that:
F1(x1) = F2(x2)
There may be a case where we won't get exactly an equality, so what you would need to do is find the smallest absolute difference between F1(x1) and F2(x2). In other words, for a mapping M, for each entry M(x1), we must find an intensity x2 such that:
M(x1) = argmin over x2 of |F1(x1) - F2(x2)|
You would do this for all 256 values, and we would produce a mapping. Once you find this mapping, you simply have to apply this mapping on the first image to get it to look like the intensity distribution of the second image. A rough (and perhaps inefficient) algorithm would look something like this. Let im1 be the first image (of type uint8) while im2 is the second image (of type uint8):
M = zeros(256,1,'uint8'); %// Store mapping - Cast to uint8 to respect data type
hist1 = imhist(im1); %// Compute histograms
hist2 = imhist(im2);
cdf1 = cumsum(hist1) / numel(im1); %// Compute CDFs
cdf2 = cumsum(hist2) / numel(im2);
%// Compute the mapping
for idx = 1 : 256
    [~,ind] = min(abs(cdf1(idx) - cdf2));
    M(idx) = ind-1;
end
%// Now apply the mapping to get first image to make
%// the image look like the distribution of the second image
out = M(double(im1)+1);
out should contain your matched image, where the intensity distribution of the first image is transformed to match that of the second image. Take special care with the out statement. The intensity range of im1 spans [0,255], but MATLAB's indexing for arrays starts at 1. Therefore, we need to add 1 to every value of im1 so we can properly index into M to produce our output. However, im1 is of type uint8, and MATLAB saturates values should you try and go beyond 255. As such, to ensure that we get to 256, we must cast to a data type with more than 8-bit precision. I decided to use double; then, when we add 1 to every value in im1, the values will span 1 to 256 so we can properly index into M. Also note that when I find the location that minimizes the difference, I must also subtract 1, because the index returned by min starts at 1 while the intensities span [0,255].
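As a quick sanity check (a sketch; the file names are placeholders and the images are assumed to be grayscale uint8), you can compare the histogram of out against that of im2:

im1 = imread('source.png');      % image to be transformed (placeholder name)
im2 = imread('reference.png');   % image whose histogram we want to match
% ... run the mapping code above to obtain out ...
figure;
subplot(3,1,1); imhist(im1); title('im1 (original)');
subplot(3,1,2); imhist(im2); title('im2 (target distribution)');
subplot(3,1,3); imhist(out); title('out (matched result)');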

imshow() not showing changed pixel values

I converted an RGB image to gray and then to binary. I wanted the white pixels of the binary image to be replaced by the corresponding pixel values of the gray image. The command window shows that all 1s are replaced with gray pixel values, but this is not reflected in the image.
The binary image (bw) and the new image (newbw) look exactly the same. Why so?
clc; clear all; close all;
i = imread('C:\Users\asus\Documents\Academics 2014 (SEM 7)\DIP\matlabTask\im1.jpg');
igray = rgb2gray(i);
bw = im2bw(igray);
[m,n] = size(bw);
newbw = zeros(m,n);
for i = 1:m
    for j = 1:n
        if bw(i,j) == 1
            newbw(i,j) = igray(i,j);
        else
            newbw(i,j) = bw(i,j);
        end
    end
end
subplot(311), imshow(igray), subplot(312), imshow(bw), subplot(313), imshow(newbw)
The reason is that when you create your new blank image, it is automatically created as type double. When doing imshow, if you provide a double type image, the dynamic range of the pixel intensities is expected to be between [0,1], where 0 is black and 1 is white. Anything less than 0 (negative) will be shown as black, and anything greater than 1 will be shown as white.
Because this is surely not the case in your image, and a lot of the values are going to be > 1, you will get an output of either black or white. I suspect your image is of type uint8, and so the dynamic range is between [0,255].
As such, what you need to do is cast your output image so that it is of the same type as your input image. Once you do this, you should be able to see the gray values displayed properly. All you have to do now is simply change your newbw statement so that the variable is of the same class as the input image. In other words:
newbw = zeros(m,n,class(igray));
Your code should now work. FWIW, you're not the first one to encounter this problem. Almost all of the imshow questions I answer come down to people forgetting that images of type double and type uint* behave differently when displayed.
Minor note
For efficiency purposes, I personally would not use a for loop. You can achieve the above behaviour by using indexing with your boolean array. As such, I would replace your for loop with this statement:
newbw(bw) = igray(bw);
Whichever locations in bw are true or logical 1, you copy those locations from igray over to newbw.
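Putting the two fixes together, the whole loop reduces to something like this (a sketch that assumes igray, bw, m and n from the question's code):

newbw = zeros(m, n, class(igray));  % preallocate with the same class as igray
newbw(bw) = igray(bw);              % copy the gray values where the mask is true
imshow(newbw)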
