To imread Parula image in Matlab without losing resolution

There is no bijection between RGB and Parula, as discussed here.
I am trying to work out how to process images that use the Parula colormap correctly.
This challenge grew out of this thread about removing black color from ECG images, generalizing the problem to Parula colors.
Data:
which is generated by
[X,Y,Z] = peaks(25);
imgParula = surf(X,Y,Z);
view(2);
axis off;
The point of this thread is not to use this code in your solution; the task is to read the second image itself.
Code:
[imgParula, map, alpha] = imread('http://i.stack.imgur.com/tVMO2.png');
where map is [] and alpha is a completely white image. Doing imshow(imgParula) gives
where you see a lot of interference and loss of resolution, because Matlab reads the image as RGB although the actual colormap is Parula.
Resizing this picture does not improve resolution.
How can you read an image into its corresponding colormap in Matlab?
I did not find any parameter for specifying the colormap when reading.

The Problem
There is a one-to-one mapping from indexed colors in the parula colormap to RGB triplets. However, no such one-to-one mapping exists in reverse to convert an RGB triplet back to a parula index (indeed, there are an infinite number of ways to define such an inverse). Thus, there is no one-to-one correspondence or bijection between the two spaces. The plot below, which shows the R, G, and B values for each parula index, makes this clearer.
This is the case for most indexed colors. Any solution to this problem will be non-unique.
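A tiny sketch makes this concrete (the perturbation value here is arbitrary): two distinct RGB triplets are both nearest to the same parula entry, so any inverse mapping must collapse them onto one index.
map = parula(256);
c1 = map(100,:);                                    % an exact parula color
c2 = c1 + 5e-4;                                     % a slightly different RGB triplet
[~, i1] = min(sum(bsxfun(@minus, map, c1).^2, 2));  % nearest entry to c1
[~, i2] = min(sum(bsxfun(@minus, map, c2).^2, 2));  % nearest entry to c2
% i1 and i2 are both 100: distinct RGB colors collapse onto one index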
A Built-in Solution
After playing around with this a bit, I realized that there's already a built-in function that may be sufficient: rgb2ind, which converts RGB image data to indexed image data. This function uses dither (which in turn calls the mex function ditherc) to perform the inverse colormap transformation.
Here's a demonstration that uses JPEG compression to add noise and distort the colors in the original parula index data:
img0 = peaks(32); % Generate sample data
img0 = img0-min(img0(:));
img0 = floor(255*img0./max(img0(:))); % Convert to 0-255
fname = [tempname '.jpg']; % Save file in temp directory
map = parula(256); % Parula colormap
imwrite(img0,map,fname,'Quality',50); % Write data to compressed JPEG
img1 = imread(fname); % Read RGB JPEG file data
img2 = rgb2ind(img1,map,'nodither'); % Convert RGB data to parula colormap
figure;
image(img0); % Original indexed data
colormap(map);
axis image;
figure;
image(img1); % RGB JPEG file data
axis image;
figure;
image(img2); % rgb2ind indexed image data
colormap(map);
axis image;
This should produce images similar to the first three below.
Alternative Solution: Color Difference
Another way to accomplish this task is by comparing the difference between the colors in the RGB image with the RGB values that correspond to each colormap index. The standard way to do this is by calculating ΔE in the CIE L*a*b* color space. I've implemented a form of this in a general function called rgb2map that can be downloaded from my GitHub. This code relies on makecform and applycform in the Image Processing Toolbox to convert from RGB to the 1976 CIE L*a*b* color space.
The following code will produce an image like the one on the right above:
img3 = rgb2map(img1,map);
figure;
image(img3); % rgb2map indexed image data
colormap(map);
axis image;
For each RGB pixel in an input image, rgb2map calculates the color difference between it and every RGB triplet in the input colormap using the CIE 1976 standard. The min function is used to find the index of the minimum ΔE (if more than one minimum value exists, the index of the first is returned). More sophisticated means can be used to select the "best" color in the case of multiple ΔE minima, but they will be more costly.
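Here is a minimal sketch of that nearest-color search (illustrative only, not the actual rgb2map implementation; img1 and map are the variables from the example above):
cform = makecform('srgb2lab');                % sRGB -> CIE 1976 L*a*b*
labMap = applycform(map, cform);              % colormap colors in L*a*b*
labImg = applycform(im2double(img1), cform);  % image pixels in L*a*b*
pix = reshape(labImg, [], 3);                 % one pixel per row
% Squared Euclidean distance in L*a*b* (deltaE^2) from every pixel to
% every colormap entry
dE2 = sum(bsxfun(@minus, permute(pix, [1 3 2]), permute(labMap, [3 1 2])).^2, 3);
[~, idx] = min(dE2, [], 2);                   % index of the nearest map color
imgIdx = reshape(idx, size(img1, 1), size(img1, 2));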
Conclusions
As a final example, I used an image of the namesake Parula bird to compare the two methods in the figure below. The two results are quite different for this image. If you manually adjust rgb2map to use the more complex CIE 1994 color difference standard, you'll get yet another rendering. However, for images that more closely match the original parula colormap (as above) both should return more similar results. Importantly, rgb2ind benefits from calling mex functions and is almost 100 times faster than rgb2map despite several optimizations in my code (if the CIE 1994 standard is used, it's about 700 times faster).
Lastly, those who want to learn more about colormaps in Matlab should read this four-part MathWorks blog post by Steve Eddins on the new parula colormap.
Update 6-20-2015: The rgb2map code described above has been updated to use different color space transforms, which improves its speed by almost a factor of two.

Related

A proper way to convert 2D Array into RGB or GrayScale image for precision difference

I have a 2D CNN model where I perform a classification task. My images all come from sensor data after conversion.
So, normally, I convert them into images using the following approach:
from PIL import Image
import numpy as np

newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.uint8(pic*255), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.jpeg")
This is what I obtain:
However, their precision is sort of important. For instance, some of the numerical values are like:
117.79348187327987 or 117.76568758022673.
As you see in the line above, they differ only in the later digits; when I use uint8, both become 117 when converted into image pixels, and they look the same, right? But I'd like them to stay distinguishable. In some cases, the difference is even at the 8th or 10th digit.
So, when I try to use mode F and save them as .jpeg in the Image.fromarray line, it gives me an error saying that PIL cannot write mode F to JPEG.
Then, I tried to first convert them to RGB like the following:
img = Image.fromarray(pic, 'RGB')
Here I am not wrapping pic in np.float32 and not multiplying it by 255; I use it as it is. Then, I convert this image to grayscale. This is what I got for the RGB image:
After converting RGB into grayscale:
As you see, there seems to be a critical difference between the first pic and the last pic. So, what is the proper way to use them in 2D CNN classification? Or should I convert them into RGB and choose grayscale in the CNN implementation with a channel of 1? My image dimensions are 1000x9. I can even change this dimension, e.g. to 250x36 or 100x90; it doesn't matter too much. By the way, with the CNN I get more than 90% test accuracy when I use the first type of image.
The main problem here is which image-conversion method will let me take those precision differences across the pixels into account. Would you give me some idea?
---- EDIT -----
Using the .tiff format I made some quick comparisons.
First of all, my data looks like the following:
So, if I convert this first reading into an image using the following code, where np.float64 with mode 'L' gives me a grayscale image:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.float64(pic), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.tiff")
It gives me this image:
Then, the first 15x9 matrix looks like the following image. The contradiction is that if you take a closer look at the numerical array, for instance the (1,4) member, it is completely black where the numerical value is 0.4326132099074307. For grayscale images, black means close to 0, since values close to 1 become white. However, if the conversion works row-wise, there is another value closer to 0 in that row, and I was expecting to see black at the (1,5) location instead. If it works column-wise, there is again something wrong. As I said, this data has already been normalized and varies between 0 and 1. So, what is the logic by which the array is converted into an image? What kind of operation does it do?
Secondly, if I first get an RGB image of the data and then convert it into a grayscale image, why am I not getting exactly the same image as the one I obtained first? Should the image coming from direct grayscale conversion (mode 'L', np.float64) and the one coming from the RGB-based route (first RGB, then grayscale) be the same? There is a difference in the black-white pixels of those images, and I do not know why.
---- EDIT 2 ----
A .tiff image with F mode and np.float32 gives the following:
I don't really understand your question, but you seem to want to store image differences that are less than 1, i.e. less than the resolution of integer values.
To do so, you need to use an image format that can store floats. JPEG, PNG, GIF, TGA and BMP cannot store floats. Instead, use TIFF, EXR or PFM formats which can handle floats.
Alternatively, you can create 16-bit PNG images wherein each pixel can store values in the range 0..65535. So, say the maximum difference you wanted to store was 60: you could calculate the difference, multiply it by 1000, and round it to make an integer in the range 0..60000, then store it as a 16-bit PNG.
You could record the scale factor as a comment within the image if it is variable.
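For instance, here is a minimal sketch of that scale-and-round idea (shown in MATLAB for consistency with the rest of this page; the data, scale factor, and file name are all illustrative assumptions):
vals = rand(100) * 60;                                 % example float data in 0..60
scale = 1000;                                          % chosen scale factor
imwrite(uint16(round(vals * scale)), 'scaled16.png');  % 16-bit PNG holds 0..60000
recovered = double(imread('scaled16.png')) / scale;    % undo the scaling on read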

Convert RGB to color scale defined by any two colors [duplicate]

I am doing some image processing and I needed to reduce the number of colors of an image. I found that rgb2ind could do that and wrote the following snippet:
clc
clear all
[X,map] = rgb2ind(RGB,6,'nodither');
X = rgb2ind(RGB, map);
rgb=ind2rgb(X,map);
rgb_uint8=uint8(rgb*255+0.5);
imshow(rgb_uint8);
But the output looks like this and I doubt there are only 6 colors in it.
It may perceptually look like there are more than 6 colours, but there truly are only 6. If you take a look at your map variable, it will be a 6 x 3 matrix. Each row contains a colour that your image is quantized to.
To double check, convert this image into a grayscale image, then compute a histogram of it. If rgb2ind worked, you should only see 6 spikes in the histogram.
BTW, to be able to reproduce your problem, I assumed you used the peppers.png image that is built into MATLAB's system path. As such, this is what I did to illustrate what I'm talking about:
RGB = imread('peppers.png');
%// Your code
[X,map] = rgb2ind(RGB,6,'nodither');
X = rgb2ind(RGB, map);
rgb=ind2rgb(X,map);
rgb_uint8=uint8(rgb*255+0.5);
imshow(rgb_uint8);
%// My code - Double check colour distribution
figure;
imhist(rgb2gray(rgb_uint8));
axis tight;
This is the figure I get:
As you can see, there are 6 spikes in our histogram. If there truly are 6 unique colours after running your code, then there should be 6 corresponding grayscale intensities when you convert the image into grayscale, and the histogram above verifies our findings.
As such, you are quantizing your image to 6 colours, but it doesn't look like it due to quantization noise of your image.
Don't doubt your result: the image contains exactly 6 colours.
As explained in the Matlab documentation, the rgb2ind function returns an indexed matrix (X in your code) and a colormap (map in your code). So if you want to check the number of colours in X, you can simply check the size of the colormap: size(map)
In your case the size will be 6x3: 6 colours described on 3 channels (red, green and blue).
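As another quick sanity check (a sketch, using the X from the code above), count the distinct indices actually used:
numUsed = numel(unique(X))   % at most 6 for this quantized image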

what is the difference between image vs imagesc in matlab

I want to know the difference between imagesc and image in Matlab.
I used this example to try to figure out the difference between the two, but I couldn't explain the difference in the output images by myself; could you help me with that?
I = rand(256,256);
for i=1:256
for j=1:256
I(i,j) = j;
end
end
figure('Name','Comparison between image et imagesc')
subplot(2,1,1);image(I);title('using image(I)');
subplot(2,1,2);imagesc(I);title('using imagesc(I)');
figure('Name','gray level of image');
image(I);colormap('gray');
figure('Name','gray level of imagesc');
imagesc(I);colormap('gray');
image displays the input array as an image. When that input is a matrix, by default image has the CDataMapping property set to 'direct'. This means that each value of the input is interpreted directly as an index to a color in the colormap, and out of range values are clipped:
image(C) [...] When C is a 2-dimensional MxN matrix, the elements of C are used as indices into the current colormap to determine the color. The
value of the image object's CDataMapping property determines the
method used to select a colormap entry. For 'direct' CDataMapping (the default), values in C are treated as colormap indices (1-based if double, 0-based if uint8 or uint16).
Since Matlab colormaps have 64 colors by default, in your case this has the effect that values above 64 are clipped. This is what you see in your image graphs.
Specifically, in the first figure the colormap is the default parula with 64 colors; and in the second figure colormap('gray') applies a gray colormap of 64 gray levels. If you try for example colormap(gray(256)) in this figure the image range will match the number of colors, and you'll get the same result as with imagesc.
imagesc is like image but applying automatic scaling, so that the image range spans the full colormap:
imagesc(...) is the same as image(...) except the data is scaled to use the full colormap.
Specifically, imagesc corresponds to image with the CDataMapping property set to 'scaled':
image(C) [...] For 'scaled' CDataMapping, values in C are first scaled according to the axes CLim and then the result is treated as a colormap index.
That's why you don't see any clipping with imagesc.
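You can verify this equivalence yourself with a short sketch (using the I from the question):
figure;
subplot(1,2,1);
imagesc(I); title('imagesc(I)');
subplot(1,2,2);
h = image(I);                      % direct mapping by default, so clipped
set(h, 'CDataMapping', 'scaled');  % now it behaves exactly like imagesc
title('image(I), scaled CDataMapping');
colormap(gray(64));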

How to add a new colormap on a raster dataset?

I am trying to replicate some ArcGIS functionality in Matlab, specifically the Add Colormap function. The Add Colormap function in ArcGIS associates a .clr file with the TIFF image so that the image has a custom color scheme associated with the TIFF when viewed.
My TIFF images have up to 6 values (1 - 6) in unsigned 8-bit integer format. You can see from the screenshot that some of the images have only 1, 2, or 3 values, while others have 6 values, resulting in variable on-screen color rendering.
I see that Matlab has colormap functionality; however, it appears to be designed only for figures rather than for TIFF files. How can I associate a colormap with these TIFF images in Matlab so that when I view them (e.g. in ArcGIS), they have a custom color scheme?
As some of the commenters have pointed out, the colormap functionality isn't actually limited to figures. A colormap is really just a lookup table that maps a particular value (index) to a specific color (typically RGB).
If you check out the documentation for imwrite, you will see that you can actually specify a colormap as the second input to the function.
load mri
im = squeeze(D(:,:,12));
% This is an indexed image (M x N)
% Save without specifying a colormap
imwrite(im, 'nocolormap.tif')
Now to save with a colormap
imwrite(im, map, 'colormap.tif')  % 'map' is the colormap loaded with the mri dataset
The other alternative is to create an RGB image within MATLAB and save it without providing a colormap to imwrite. You can either create this image manually:
% Normalize a little bit for display
im = double(im) ./ max(im(:));
output = repmat(im, [1 1 3]); % Make the image (M x N x 3)
imwrite(output, 'rgb_grayscale.tif')
Or you can use the built-in function ind2rgb to convert an indexed image to an RGB image using a specific colormap.
rgb_image = ind2rgb(im, jet(256));
imwrite(rgb_image, 'rgb_jet.tif')
One thing that is pretty important to remember in all of this is that by default, any MATLAB colormap only has 64 colors. So if you need more colors than that, you can specify it when constructing the colormap
size(gray)        % ans: 64 3
size(gray(1000))  % ans: 1000 3
This is particularly important if you're trying to display high fidelity data.

How to display a Gray scale image using boundary defined in another binary image

I have an original grayscale image (I am using a mammogram image with labels outside the image).
I need to remove some objects (labels) in that image, so I converted the grayscale image to a binary image. Then I followed the answer method provided in
How to Select Object with Largest area
Finally I extracted the object with the largest area as a binary image. I want that region in grayscale so I can access and segment small objects within it (for example, minor tissues in the region) and also detect their edges.
How can I get that separated object region as a grayscale image, or is there any way to get the largest object region from the grayscale image directly, without converting to binary?
(I am new to Matlab. I don't know whether I explained it correctly or not. If you can't follow, I'll provide more detail.)
If I understood you correctly, you are looking to have a gray image with only the biggest blob being highlighted.
Code
img = imread(IMAGE_FILEPATH);
BW = im2bw(img,0.2); %%// 0.2 worked to get a good area for the biggest blob
%%// Biggest blob
[L, num] = bwlabel(BW);
counts = sum(bsxfun(@eq,L(:),1:num));
[~,ind] = max(counts);
BW = (L==ind);
%%// Close the biggest blob
[L,num] = bwlabel( ~BW );
counts = sum(bsxfun(@eq,L(:),1:num));
[~,ind] = max(counts);
BW = ~(L==ind);
%%// Original image with only the biggest blob highlighted
img1 = uint8(255.*bsxfun(@times,im2double(img),BW));
%%// Display input and output images
figure,
subplot(121),imshow(img)
subplot(122),imshow(img1)
Output
If I understand your question correctly, you want to use the binary map and access the corresponding pixel intensities in those regions.
If that's the case, then it's very simple. You can use the binary map to identify the spatial co-ordinates where you want to access the intensities in the original image. Create a blank image, then copy these intensities over to the blank image at those same co-ordinates.
Here's some sample code that you can play around with.
% Assumptions:
% im - Original image
% bmap - Binary image
% Where the output image will be stored
outImg = uint8(zeros(size(im)));
% Find locations in the binary image that are white
locWhite = find(bmap == 1);
% Copy over the intensity values from these locations from
% the original image to the output image.
% The output image will only contain those pixels that were white
% in the binary image
outImg(locWhite) = im(locWhite);
% Show the original and the result side by side
figure;
subplot(1,2,1);
imshow(im); title('Original Image');
subplot(1,2,2);
imshow(outImg); title('Extracted Result');
Let me know if this is what you're looking for.
Method #2
As suggested by Rafael in his comments, you can skip using find altogether and use logical indexing:
outImg = img;
outImg(~bmap) = 0;
I decided to use find as it is less obfuscated for a beginner, even though it is less efficient. Either method will give you the correct result.
Some food for thought
The extracted region that you have in your binary image has several holes. I suspect you would want to grab the entire region without any holes. As such, I would recommend that you fill in these holes before you use the above code. The imfill function from MATLAB works nicely and it accepts binary images as input.
Check out the documentation here: http://www.mathworks.com/help/images/ref/imfill.html
As such, apply imfill on your binary image first, then go ahead and use the above code to do your extraction.
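In other words, a minimal sketch of the combined steps (im and bmap as above):
bmap = imfill(bmap, 'holes');   % fill interior holes in the binary mask first
outImg = im;                    % then extract the region as shown earlier
outImg(~bmap) = 0;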
