Comparing the intensities of 2 images

I have two images, 1 and 2. I want to get the V (intensity) channel of each image in HSV space, and then make the V (intensity) values of the first image equal to the V (intensity) values of the second image.
I used this code to get the V channels:
v = image1(:, :, 3);
u = image2(:, :, 3);
How do I make both u and v the same value?
Thanks,

It definitely sounds like you want to do some kind of histogram equalization. As a first attempt you can try Matlab's histeq function. (You should also read the documentation for imhist.) To make the intensity values in image1 more closely match those in image2 (they will almost certainly never become identical) you would do something like this:
v = image1(:, :, 3);
u = image2(:, :, 3);
targetHist = imhist(u);
newV = histeq(v, targetHist);
newImage1 = cat(3, image1(:, :, 1), image1(:, :, 2), newV);
% or, alternatively:
image1(:, :, 3) = newV;
If this technique doesn't give you the results you need, there are other methods of histogram equalization that you can use, including adaptive techniques that equalize intensities by regions.
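For example, contrast-limited adaptive histogram equalization is available as adapthisteq in the Image Processing Toolbox. A minimal sketch of that route (the ClipLimit value here is just a starting point to tune, not a recommendation):
v = image1(:, :, 3);
% Equalize over local tiles rather than the whole channel
newV = adapthisteq(v, 'ClipLimit', 0.02);
image1(:, :, 3) = newV;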
Edit:
Here's a little about what the code is doing. For more information you can look at the Algorithm section of the link to histeq I gave above.
Given a reference image (in this case image2) in HSV format, we're taking the Value channel (or Intensity, or Brightness channel) and using imhist to divide the intensity levels in the image into 256 bins (by default), with each bin containing the number of pixels that have that intensity value.
histeq in this usage actually performs histogram matching: it transforms the input channel so that its histogram approximately matches the target histogram computed from the second image. For instance, if the target histogram has all of its bins empty except for bin 100, which has 300 pixels, and your image has all of its bins empty except for bin 80, which has 300 pixels, histeq will increase the intensity of each pixel in the image by 20. A very simplistic example, but hopefully you get the idea.
It would be very helpful to plot the 3 histograms for image1, image2 and the equalized histogram to see how the intensity levels for image1 are changed.
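A minimal sketch of that comparison, reusing the variables from the code above:
figure;
subplot(3, 1, 1); imhist(v); title('image1 intensity');
subplot(3, 1, 2); imhist(u); title('image2 intensity (target)');
subplot(3, 1, 3); imhist(newV); title('image1 intensity after histeq');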

Related

How to convert cv2.addWeighted and cv2.GaussianBlur into MATLAB?

I have this Python code:
cv2.addWeighted(src1, 4, cv2.GaussianBlur(src1, (0, 0), 10), -4, 128)
How can I convert it to Matlab? So far I got this:
f = imread0('X.jpg');
g = imfilter(f, fspecial('gaussian',[size(f,1),size(f,2)],10));
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+128;
I want to subtract local mean color image.
Input image:
Blending output from OpenCV:
The documentation for cv2.addWeighted defines it as:
cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) → dst
The operation performed on the output image is:
dst = saturate(src1*alpha + src2*beta + gamma)
(source: opencv.org)
Therefore, what your code is doing is exactly correct... at least for cv2.addWeighted. You take alpha, multiply it by the first image, take beta, multiply it by the second image, then add gamma on top. The only intricacy left to deal with is saturate, which means that any values beyond the dynamic range of the data type you are dealing with get capped at that range. Because the result can contain negatives, saturating simply means setting any negative values to 0 and any values greater than the expected maximum to that maximum. In this case, you'll want to make any values larger than 1 equal to 1. As such, it's a good idea to convert your image to double through im2double, because you want the additions and subtractions beyond the dynamic range to happen first, and only then saturate. If you use the image's default precision (uint8), clipping happens before your explicit saturate step, and that gives you the wrong results. Because you're doing this double conversion, you'll also want to change the 128 you add for gamma to 0.5 to compensate.
Now, the only slight problem is your Gaussian blur. Looking at the documentation, by doing cv2.GaussianBlur(src1, (0, 0), 10), you are telling OpenCV to infer the mask size while the standard deviation is 10. MATLAB does not infer the size of the mask for you, so you need to do this yourself. A common practice is simply to take six times the standard deviation, take the floor and add 1, for both the horizontal and vertical dimensions of the mask. You can see my post here on the justification as to why this is common practice: By which measures should I set the size of my Gaussian filter in MATLAB?
Therefore, in MATLAB, you would do this with your Gaussian blur instead. BTW, it's simply imread, not imread0:
f = im2double(imread('http://i.stack.imgur.com/kl3Md.jpg')); %// Change - Reading image directly from StackOverflow
sigma = 10; %// Change
sz = 1 + floor(6*sigma); %// Change
g = imfilter(f, fspecial('gaussian', sz, sigma)); %// Change
%// Rest of the code is the same
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+0.5; %// Change
%// Saturate
f1(f1 > 1) = 1;
f1(f1 < 0) = 0;
I get this image:
Take note that there is a slight difference in the way the result appears between OpenCV and MATLAB... especially the haloing around the eye. This is because OpenCV infers the mask size for the Gaussian blur differently. I'm not sure exactly what it does, but choosing the mask size from the standard deviation as above is one of the most common heuristics for it. Play around with the standard deviation until you get something you like.

Are there MATLAB scripts that will give me a quantitative/visual depiction of all the colors in an image?

I have tried the following:
b=imread('/home2/s163720/lebron.jpg');
hsv = rgb2hsv(b);
h = hsv(:,:,1);
imhist(h,16)
However, it does not give me quite what I'm looking for. It would be great to see a counter of some sort for different hues, or maybe even a distribution of the colors.
This would be greatly appreciated.
I think this may be along the lines of what you're looking for.
%Read the image
img = imread('/home2/s163720/lebron.jpg');
%convert to hsv and reshape to an N x 3 matrix
hsv = rgb2hsv(img);
hsv2 = squeeze(reshape(hsv, [], 1, 3));
%Extract hue (and convert to an angle) and saturation
Hue = 2*pi*hsv2(:,1);
Saturation = hsv2(:,2);
%The number of bins in the hue and saturation directions
RadialColorBins = 50;
AngularColorBins = 50;
%Where the edges of the bins are
edges = {linspace(0, 1, RadialColorBins), linspace(0, 2*pi, AngularColorBins) - 2*pi / AngularColorBins};
%bin the data
[heights,centers] = hist3([Saturation, Hue],'Edges',edges);
%Extract the centers
radius = centers{1};
angle = centers{2};
%Force periodicity
angle = [angle, angle(1)];
heights = [heights, heights(:, 1)]';
%Mesh the r and theta components
[Radius, Angle] = meshgrid(radius, angle);
%Make a color map for the polar plot
CMap = hsv2rgb(cat(3, Angle/(2*pi), Radius, ones(size(Radius))));
figure(1)
imshow(img)
%polar histogram in s-h space
figure(2)
surf(Radius.*cos(Angle), Radius.*sin(Angle), heights, CMap,'EdgeColor','none');
I know this is supposed to be where the answers go, but I am quite sure that there is no generic answer to your question. However, here are a couple of possibilities; maybe you or anyone else can come up with more.
The basic problem, since you want a histogram, is that you have to choose some representation of a color as a single number, which is quite difficult.
The first solution could be to transform the RGB color to wavelength and then ignore the intensity of the image. The problem with this idea is that RGB defines more colors than wavelength alone does. See: http://jp.mathworks.com/matlabcentral/newsreader/view_thread/313712
A second solution could be to define a number as A = sum(rgb.*[1,10,100]); and then use this number as your representation of the color.
A third solution would be to transform the hexadecimal representation of the number to a decimal representation and then use this number.
Once you have a representation for the color of every pixel, you simply reshape the matrix into a vector and use the standard hist command to plot it. But as mentioned, maybe someone has a better idea for representing a color as a single number.
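As a rough sketch of the second idea, with the caveat that the weights [1, 10, 100] are arbitrary and the resulting number is not perceptually meaningful:
img = im2double(imread('/home2/s163720/lebron.jpg'));
%Flatten to an N x 3 list of RGB pixels
rgb = reshape(img, [], 3);
%Collapse each RGB triple to a single number
A = sum(bsxfun(@times, rgb, [1, 10, 100]), 2);
%Histogram of the single-number color representation
hist(A, 64);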

To convert only black color to white in Matlab

I know this thread about converting black color to white and white to black simultaneously.
I would like to convert only black to white.
I know this thread about doing what I am asking, but I do not understand what goes wrong.
Picture
Code
rgbImage = imread('ecg.png');
grayImage = rgb2gray(rgbImage); % for non-indexed images
level = graythresh(grayImage); % threshold for converting image to binary,
binaryImage = im2bw(grayImage, level);
% Extract the individual red, green, and blue color channels.
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
% Make the black parts pure red.
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 0;
blueChannel(~binaryImage) = 0;
% Now recombine to form the output image.
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
Which gives
There seems to be something wrong in the red color channel.
Black is just (0,0,0) in RGB, so removing it should mean turning every (0,0,0) pixel to white (255,255,255).
Doing this idea with
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
Gives
So I must have misunderstood something in Matlab. The blue parts should not contain any black, so this last image is strange.
How can you turn only black color to white?
I want to keep the blue color of the ECG.
If I understand you properly, you want to extract out the blue ECG plot while removing the text and axes. The best way to do that would be to examine the HSV colour space of the image. The HSV colour space is great for discerning colours just like the way humans do. We can clearly see that there are two distinct colours in the image.
We can convert the image to HSV using rgb2hsv and we can examine the components separately. The hue component represents the dominant colour of the pixel, the saturation denotes the purity or how much white light there is in the pixel and the value represents the intensity or strength of the pixel.
Try visualizing each channel doing:
im = imread('http://i.stack.imgur.com/cFOSp.png'); %// Read in your image
hsv = rgb2hsv(im);
figure;
subplot(1,3,1); imshow(hsv(:,:,1)); title('Hue');
subplot(1,3,2); imshow(hsv(:,:,2)); title('Saturation');
subplot(1,3,3); imshow(hsv(:,:,3)); title('Value');
Hmm... well the hue and saturation don't help us at all. It's telling us the dominant colour and saturation are the same... but what sets them apart is the value. If you take a look at the image on the right, we can tell them apart by the strength of the colour itself. So what it's telling us is that the "black" pixels are actually blue but with almost no strength associated to it.
We can actually use this to our advantage. Any pixels whose value component is above a certain threshold are the pixels we want to keep.
Try setting a threshold... something like 0.75. MATLAB's dynamic range for HSV values is [0,1], so:
mask = hsv(:,:,3) > 0.75;
When we threshold the value component, this is what we get:
There's obviously a bit of quantization noise... especially around the axes and font. What I'm going to do next is perform a morphological erosion to eliminate the quantization noise around each of the numbers and the axes. I'm going to enlarge the masked regions a bit to ensure that this noise gets removed. Using the Image Processing Toolbox:
se = strel('square', 5);
mask_erode = imerode(mask, se);
We get this:
Great, so what I'm going to do now is make a copy of your original image, then set any pixel that is black from the mask I derived (above) to white in the final image. All of the other pixels should remain intact. This way, we can remove any text and the axes seen in your image:
im_final = im;
mask_final = repmat(mask_erode, [1 1 3]);
im_final(~mask_final) = 255;
I need to replicate the mask in the third dimension because this is a colour image and I need to set each channel to 255 simultaneously in the same spatial locations.
When I do that, this is what I get:
Now you'll notice that there are gaps in the graph... which is to be expected due to quantization noise. We can go further by converting this image to grayscale, thresholding it, then joining the edges together with a morphological dilation. This is safe because we have already eliminated the axes and text. We can then use this as a mask to index into the original image to obtain our final graph.
Something like this:
im2 = rgb2gray(im_final);
thresh = im2 < 200;
se = strel('line', 10, 90);
im_dilate = imdilate(thresh, se);
mask2 = repmat(im_dilate, [1 1 3]);
im_final_final = 255*ones(size(im), class(im));
im_final_final(mask2) = im(mask2);
I threshold the previous image that we got without the text and axes after I convert it to grayscale, and then I perform dilation with a line structuring element that is 90 degrees in order to connect those lines that were originally disconnected. This thresholded image will contain the pixels that we ultimately need to sample from the original image so that we can get the graph data we need.
I then take this mask, replicate it, make a completely white image and then sample from the original image and place the locations we want from the original image in the white image.
This is our final image:
Very nice! I had to do all of that image processing because your image has quantization noise to begin with, so it's going to be a bit harder to extract the graph entirely. Ander Biguri explained colour quantization noise in more detail in his answer, so certainly check out his post.
However, as a qualitative measure, we can subtract this image from the original image and see what is remaining:
imshow(rgb2gray(abs(double(im) - double(im_final_final))));
We get:
So it looks like the axes and text are removed fine, but there are some traces in the graph that we didn't capture from the original image and that makes sense. It all has to do with the proper thresholds you want to select in order to get the graph data. There are some trouble spots near the beginning of the graph, and that's probably due to the morphological processing that I did. This image you provided is quite tricky with the quantization noise, so it's going to be very difficult to get a perfect result. Also, these thresholds unfortunately are all heuristic, so play around with the thresholds until you get something that agrees with you.
Good luck!
What's the problem?
You want to detect all black parts of the image, but they are not really black
Example:
Your idea (or your code):
You first binarize the image, selecting the pixels that ARE something against the pixels that are not. In short, you do: if pixel>level; pixel is something
Therefore there is a small misconception here! When you write
% Make the black parts pure red.
it should read
% Make every pixel that is something (not background) pure red.
Therefore, when you do
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
You are doing
% Make every pixel that is something (not background) white
% (or what it is the same in this case, delete them).
Therefore what you should get is a completely white image. The image is not completely white because some pixels were labelled as "not something, part of the background" by the value of level, which in the case of your image is around 0.6.
A solution one could think of is manually setting the level to 0.05 or similar, so that only black pixels are selected in the gray-to-binary thresholding. But this will not work 100%: as you can see, the numbers contain some very non-black values.
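Just to illustrate that idea (the 0.05 level here is a hand-picked guess; tune it for your image):
grayImage = rgb2gray(imread('ecg.png'));
% Only pixels darker than 0.05*255 end up as 0 in the binary image
binaryImage = im2bw(grayImage, 0.05);
onlyBlack = ~binaryImage; % mask of the near-black pixels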
How would I try to solve the problem:
I would try to find the colour you want, extract just that colour from the image, and then delete outliers.
Extract blue using HSV (I believe I answered you somewhere else on how to use HSV):
rgbImage = imread('ecg.png');
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
th=0.1;
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
imshow(I2)
Once here, we would like to get the biggest blue part, and that will be our signal. Therefore the best approach seems to be to first binarize the image, taking all the blue pixels:
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
And then using bwlabel, label all the independent pixel "islands".
% Label each "blob"
lbl=bwlabel(~bw);
The label most repeated will be the signal. So we find it and separate the background from the signal using that label.
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
Here is a plot with the signal, background and difference (using the same equation as @rayryeng used):
Here is a variation on @rayryeng's solution to extract the blue signal:
%// retrieve picture
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
To get a better view, here is the detected mask overlaid on top of the original image (I'm using the imoverlay function from the File Exchange):
figure
imshow(imoverlay(imgRGB, BW, uint8([255,0,0])))
Here is code for this:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = ~redChannel&~greenChannel&~blueChannel;
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
black is the area containing the black pixels. These pixels are set to white in each color channel.
In your code you use a threshold and a grayscale image, so of course a much bigger area of pixels is set to white (or red). In this code, only pixels that contain absolutely no red, green or blue are set to white.
The following code does the same with a threshold for each color channel:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = (redChannel<150)&(greenChannel<150)&(blueChannel<150);
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);

How to add a Gaussian shaped object to an image?

I am interested in adding a single Gaussian shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data available here which can be used with this script to calculate NDVI
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce an hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have not specified what this is. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally a good rule to choose the standard deviation of your mask according to the 3-sigma rule. As such, once you set hsize, you can calculate sigma to be:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. After that, invoke fspecial like I did above. It's generally a good idea to make hsize an odd integer. The reason is that when we finally place this in your image, the syntax will allow your mask to be symmetrically placed. I'll talk about this when we get to the last question.
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
Hopefully this adequately answers your questions. Good luck!

converting rgb image to double and keeping the values in matlab

I have an image and I want to import it into Matlab. I am using the following code. The problem I have is that when I convert the image to grayscale, everything changes and the converted image does not look like the original. In other words, I want to keep the values (or, let's say, the image) as they are in the original. Is there any way to do this?
I = imread('myimage.png');
figure, imagesc(I), axis equal tight xy
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
Your original image is already using a jet colormap. The problem is, when you convert it to grayscale, you lose some crucial information. See the image below.
In the original image you have a heatmap. Blue areas generally indicate "low value", whereas red areas indicate "high values". But when converted to grayscale, both areas indicate low value, as they both approach dark pixels (see the arrows).
A possible solution is this:
You take every pixel of your image, find the nearest (closest) color value in the jet colormap, and use its index as a gray value.
I will show you first the final code and the results. The explanation goes below:
I = im2double(imread('myimage.png'));
map = jet(256);
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
Igray = zeros(size(I, 1), size(I, 2), 'uint8');
for ii = 1:size(Irgb, 1)
    [~, idx] = min(sum((bsxfun(@minus, Irgb(ii, :), map)) .^ 2, 2));
    Igray(ii) = idx - 1;
end
clear Irgb;
subplot(2,1,1), imagesc(I), axis equal tight xy
subplot(2,1,2), imagesc(Igray), axis equal tight xy
Result:
>> whos I Igray
  Name        Size           Bytes  Class     Attributes
  I           110x339x3     894960  double
  Igray       110x339        37290  uint8
Explanation:
First, you get the jet colormap, like this:
map = jet(256);
It will return a 256x3 colormap with the possible colors on the jet palette, where each row is a RGB pixel. map(1,:) would be kind of a dark blue, and map(256,:) would be kind of a dark red, as expected.
Then, you do this:
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
... to turn your 110x339x3 image into a 37290x3 matrix, where each row is a RGB pixel.
Now, for each pixel, you compute the Euclidean distance from that pixel to every color in the map. You take the index of the nearest one and use it as a gray value. The minus one (-1) is because the index is in the range 1..256, but a gray value is in the range 0..255.
Note: the Euclidean distance takes a square root at the end, but since we are just trying to find the closest value, there is no need to do so.
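As an aside, if you have the Statistics Toolbox, the whole loop can be replaced by a single vectorized nearest-color lookup; a sketch under that assumption, reusing the variables above:
D = pdist2(Irgb, map);        % one distance per (pixel, palette color) pair
[~, idx] = min(D, [], 2);     % nearest palette entry for every pixel
Igray = reshape(uint8(idx - 1), size(I, 1), size(I, 2));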
EDIT:
Here is a 10x faster version of the code:
I = im2double(imread('myimage.png'));
map = jet(256);
[C, ~, IC] = unique(reshape(I, size(I, 1) * size(I, 2), 3), 'rows');
equiv = zeros(size(C, 1), 1, 'uint8');
for ii = 1:numel(equiv)
    [~, idx] = min(sum((bsxfun(@minus, C(ii, :), map)) .^ 2, 2));
    equiv(ii) = idx - 1;
end
Irgb = reshape(equiv(IC), size(I, 1), size(I, 2));
Irgb = Irgb(end:-1:1,:);
clear equiv C IC;
It runs faster because it exploits the fact that the colors in your image are restricted to the colors in the jet palette. It finds the unique colors and matches only those against the palette values. With fewer colors to match, the algorithm runs much faster. Here are the times:
Before:
Elapsed time is 0.619049 seconds.
After:
Elapsed time is 0.061778 seconds.
In the second image, you're using the default colormap, i.e. jet. If you want grayscale, then try using colormap(gray).
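A quick sketch of that, reusing the question's variables:
I = imread('myimage.png');
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
colormap(gray) % display the grayscale values with a gray colormap instead of jet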
