I have an image of class double that I want to display as unsigned 16-bit integers, so I do:
I = im2uint16(I);
figure;imshow(I);title('Image being saved')
This displays the following (with its usual noise):
Now I want to write this image as a .png with a bit depth of 16. I do:
imwrite(I,'image.png','BitDepth',16);
And now the image, opened with Photoshop CS5 or Windows Photo Viewer, looks like this (the noise has magically disappeared):
Can someone explain this strange behaviour?
How to Reproduce this error
Download the image I used into C:\test\ from here:
Now run this script:
I = im2double(imread('C:\test\test_matlab.tif'));
% Add gaussian noise with variance = 0.0012
I = imnoise(I,'gaussian',0,0.0012);
figure,imshow(I);
imwrite(I,'C:\test\withNoise.tif');
Then compare the figure displayed in MATLAB with the saved file.
It's difficult to say for sure because you didn't give enough data to reproduce the problem, but I'd guess it's a display issue: the image is larger than your physical display window, so some downsampling must be applied to display it. Depending on how that resampling is done, the result can look, in this scenario, very different.
Suppose MATLAB applies nearest-neighbour resampling for its display; that would explain why the image looks very noisy. If another image viewer instead applies bilinear interpolation or something similar, that amounts to a local average that practically filters out the white noise.
To test this, try the same with a small image, or zoom into the apparently clean image to see it at real size (100%: one image pixel = one display pixel).
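To make this concrete, here is a minimal NumPy sketch (a synthetic flat image plus Gaussian noise; the 4x downsampling factor and noise level are arbitrary assumptions): plain subsampling keeps the noise, while block averaging (roughly what bilinear/area resampling does) suppresses it.

```python
import numpy as np

rng = np.random.default_rng(0)
img = 0.5 + rng.normal(0, 0.035, (1000, 1000))  # flat gray + Gaussian noise

# Nearest-neighbour "display" downsampling: just pick every 4th pixel.
nearest = img[::4, ::4]

# Averaging downsampling: mean over each 4x4 block, a crude stand-in
# for bilinear/area resampling.
averaged = img.reshape(250, 4, 250, 4).mean(axis=(1, 3))

print(nearest.std())   # noise survives subsampling almost unchanged
print(averaged.std())  # noise std drops roughly 4x (sqrt of 16 samples)
```

The averaged version's standard deviation drops by roughly the square root of the number of pixels averaged per block, which is why an interpolating viewer's shrunken preview looks almost noise-free.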
Update: See also here
Here's what I did:
%# read the image (why is it so big?)
I = im2double(imread('https://p7o1zg.bay.livefilestore.com/y1pcQVsmssygbS4BLW24_X1E09BKt_Im-2yAxXBqWesC47gpv5bdFZf962T4it1roSaJkz5ChLBS0cxzQe6JfjDNrF7x-Cc12x8/test_matlab.tif?psid=1'));
%# add noise
I = imnoise(I,'gaussian',0,0.0012);
%# write tiff
imwrite(I,'withNoise.tif');
%# read the tiff again
I2 = imread('withNoise.tif');
class(I2) %# -- oopsie, it's uint8 now!
%# convert to uint16 as in original post
I = im2uint16(I);
%# write again
imwrite(I,'withNoise16.png','bitDepth',16);
%# read it
I2 = imread('withNoise16.png');
%# compare
all(all(I==I2)) %# everything is equal
So there is no funky stuff going on in writing/reading the image (though you lose some information in the bit conversion - your original image only occupies about a third of the dynamic range, so you lose more information than if you stretched the contrast before converting).
However, the image is 2k-by-2k. When I only look at the top right corner of the image (taking 500-by-500 pix), it is displayed the same in Matlab and other graphics programs. So I bet it's a matter of Matlab resampling your image differently from other programs. As @leonbloy suggests, Matlab may be doing nearest-neighbor resampling, while other programs would do interpolation.
Related
I have a 2D CNN model that performs a classification task. My images all come from sensor data after conversion.
So, normally, I convert them into images using the following approach:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.uint8(pic*255), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.jpeg")
This is what I obtain:
However, their precision is somewhat important. For instance, some of the numerical values are:
117.79348187327987 or 117.76568758022673.
As you see in the line above, the values differ only in their decimal digits; when I use uint8, both become 117 when converted to image pixels, so they look the same. But I'd like them to be distinguishable. In some cases, the difference only appears at the 8th or 10th decimal digit.
So when I try to use mode F and save as .jpeg in the Image.fromarray line, it gives me an error saying that PIL cannot write mode F to JPEG.
Then I tried first converting them to RGB, like the following:
img = Image.fromarray(pic, 'RGB')
Here I am not wrapping pic in np.float32 and not multiplying it by 255; I use it as it is. Then I convert this image to grayscale. This is what I got for the RGB image:
After converting RGB into grayscale:
As you see, there seems to be a critical difference between the first picture and the last one. So, what is the proper way to use them in 2D CNN classification? Or should I convert them into RGB and choose grayscale in the CNN implementation with a channel count of 1? My image dimensions are 1000x9. I can even change this, e.g. to 250x36 or 100x90; it doesn't matter much. By the way, with the CNN I am able to get more than 90% test accuracy when I use the first type of image.
The main problem is: with which image conversion method will I be able to preserve those precision differences across the pixels? Could you give me some ideas?
---- EDIT -----
Using the .tiff format, I made some quick comparisons.
First of all, my data looks like the following;
So, if I convert this first reading into an image using the following code, where np.float64 and mode 'L' give me a grayscale image:
newsize = (9, 1000)
pic = acc_normalized[0]
img = Image.fromarray(np.float64(pic), 'L')
img = img.resize(newsize)
image_path = "Images_Accel"
image_name = "D1." + str(2)
img.save(f"{image_path}/{image_name}.tiff")
It gives me this image;
Then the first 15x9 part of the matrix looks like the following image. The contradiction is that if you take a closer look at the numerical array, for instance the (1,4) element, the pixel is completely black although the numerical value is 0.4326132099074307. In a grayscale image, black means close to 0 and white means close to 1. If the conversion worked per row, there is another value closer to 0 and I would have expected the (1,5) location to be black instead. If it worked per column, something is wrong again. As I said, this data is already normalized and varies between 0 and 1. So what is the logic by which the array is converted into an image? What kind of operation does it do?
Secondly, if I first make an RGB image of the data and then convert it into a grayscale image, why am I not getting exactly the same image as the one I obtained first? Shouldn't the image from direct grayscale conversion (mode L, np.float64) and the one derived from RGB (first RGB, then converted to grayscale) be the same? There is a difference in the black and white pixels of those images, and I do not know why.
---- EDIT 2 ----
.tiff image with F mode and np.float32 gives the following;
I don't really understand your question, but you seem to want to store image differences that are less than 1, i.e. less than the resolution of integer values.
To do so, you need to use an image format that can store floats. JPEG, PNG, GIF, TGA and BMP cannot store floats. Instead, use TIFF, EXR or PFM formats which can handle floats.
Alternatively, you can create 16-bit PNG images, where each pixel can store values in the range 0..65535. Say the maximum difference you wanted to store was 60: you could calculate the difference, multiply it by 1000, and round it to an integer in the range 0..60000, then store that as a 16-bit PNG.
You could record the scale factor as a comment within the image if it is variable.
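A minimal NumPy sketch of that scale-and-round scheme (the factor 1000 comes from the example above; the sample values are made up):

```python
import numpy as np

# Hypothetical float differences to store; assumed to stay below 60.
diff = np.array([0.0, 117.79348187327987 - 117.76568758022673, 59.9999])

scale = 1000.0                                      # 60 * 1000 = 60000 < 65535
encoded = np.round(diff * scale).astype(np.uint16)  # this is what goes into the PNG
decoded = encoded / scale                           # recovered on read-back

print(np.abs(decoded - diff).max())  # worst-case error is at most 0.0005
```

Writing `encoded` out as a 16-bit PNG and dividing by the recorded scale factor on load recovers every difference to within half a scale step.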
I am a new user of image processing in MATLAB. My first aim is to implement the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: masks are defined in Figure 7, but I couldn't find the mask data set, so I use a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to Lena, but I don't want to use Paint =) Also, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
First of all, you have to know that this paper was published on arXiv. When a paper is published on arXiv, it is always a good idea to learn more about the author and/or the university behind it.
TRUST me on that: you do not need to waste your time on this paper.
I understand what you want, but it is not a good idea to get the mask from a print screen. The pixel values captured that way may not match the original values, and zooming may change the size, so you need to make sure the sizes are the same.
If you still want to do it: take a print screen and paste the image.
Crop the mask.
Convert RGB to grayscale.
Threshold the grayscale image to get the binary mask.
Note that if you save the image as JPEG, compression distortions around high-frequency edges will change the edge shapes.
Trying to segment out the lung region, I am having a lot of trouble. The incoming image looks like this (this is essentially a JPEG conversion, and each pixel is 8 bits):
I = dicomread('000019.dcm');
I8 = uint8(I / 256);
B = im2bw(I8, 0.007);
segmented = imclearborder(B);
Above script generates:
Q-1
I am interested in the entire inner black part, including the white matter as well. I started with MATLAB a couple of days ago, so I don't quite get how to do this. If it is not clear what kind of output I want, let me know and I will upload an image, but I think there is no need.
Q-2
In B = im2bw(I8, 0.007), why do I need to give such a low threshold? With higher thresholds everything is white or black. I have read the documentation, and as I understand it, pixels with a value less than 0.007 are marked black and everything above is white. Is it because of my 16-to-8-bit conversion?
Another automatic solution, which I did quickly using ImageJ (the same algorithms exist in MATLAB):
Automatic thresholding using Huang or Li in the color space of your choice (all of them work).
Opening with a disk-shaped structuring element (deletes the small components).
Connected-components labeling.
Delete the components that touch the border of the image.
Fill holes.
And you have a clean result.
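For reference, the same five steps can be sketched in Python with SciPy on a synthetic stand-in image (the shapes and the fixed threshold are made-up assumptions; a real run would use automatic Huang/Li thresholding as above):

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for the CT slice: bright body, two dark "lungs",
# and a dark background that touches the image border.
img = np.zeros((100, 100))
img[10:90, 10:90] = 1.0   # body
img[30:70, 20:45] = 0.2   # left lung
img[30:70, 55:80] = 0.2   # right lung
img[50, 30] = 1.0         # a small bright vessel inside the left lung

# 1. Threshold (fixed here; Huang/Li would pick it automatically).
binary = img < 0.5

# 2. Opening with a small structuring element removes tiny components.
binary = ndimage.binary_opening(binary, structure=np.ones((3, 3)))

# 3-4. Label connected components and delete those touching the border.
labels, n = ndimage.label(binary)
border_labels = np.unique(np.r_[labels[0, :], labels[-1, :],
                                labels[:, 0], labels[:, -1]])
binary[np.isin(labels, border_labels[border_labels > 0])] = False

# 5. Fill holes (recovers the bright vessel pixel inside the lung).
binary = ndimage.binary_fill_holes(binary)

print(ndimage.label(binary)[1])  # 2 components remain: the two lungs
```

Border clearing removes the dark background ring (which touches the image edge), leaving only the enclosed lung regions.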
Here's a working solution in python using OpenCV:
import cv2 #openCV
import numpy as np
filename = 'zFrkx.jpg' #name of the file, in quotes... assumes the file is in the same dir as the .py file
img_gray = cv2.imread(filename, 0) #reads the jpg image as a grayscale array
min_val = 100 #try shifting these around to expand or collapse the area of interest
max_val = 150
ret, lung_mask = cv2.threshold(img_gray, min_val, max_val, cv2.THRESH_BINARY_INV) #fixed threshold using the values defined above
lung_layer = cv2.bitwise_and(img_gray, img_gray, mask = lung_mask)
cv2.imwrite('cake.tif', lung_layer) #writes the desired layer to the current working dir
I tried running the script with the threshold values arbitrarily set to 100 and 150 and got the following result, from which you could select the largest connected element using dilation and segmentation techniques (http://docs.opencv.org/master/d3/db4/tutorial_py_watershed.html#gsc.tab=0).
Also, I suggest you crop the top and bottom X pixels to cut out the text, since no lung will fill the top or bottom of the picture.
Use TIFF instead of JPEG format to avoid compression-related artifacts.
I know you noted that you'd like the medullar(?) white matter too. I would be glad to help with that, but could you first explain in plain English how your shared MATLAB code works? It seems to work pretty well for the WM.
Hope this helps!
I have tried image subtraction in MATLAB, but realised that there is a big blue patch on the image. Please see the image for more details.
Another image showing roughly where the blue patch extends to:
The picture on the left in the top two images shows the result after subtraction. You can ignore the picture on the right of the top two images. This is one of the original images:
and this is the background I am subtracting.
The purpose is to get the foreground image and blob it, then count the number of blobs to see how many books are stacked vertically, viewed from their sides. I am experimenting with how the blob method works in MATLAB.
Does anybody have any idea? Below is the code showing how I carry out my background subtraction and display the result. Thanks.
[filename, user_canceled] = imgetfile;
fullFileName=filename;
rgbImage = imread(fullFileName);
folder = fullfile('C:\Users\Aaron\Desktop\OPENCV\Book Detection\Sample books');
baseFileName = 'background.jpg';
fullFileName = fullfile(folder, baseFileName);
backgroundImage =imread(fullFileName);
rgbImage= rgbImage - backgroundImage;
%display foreground image after background subtraction%%%%%%%%%%%%%%
subplot( 1,2,1);
imshow(rgbImage, []);
Because the foreground objects (i.e. the books) are opaque, the background does not affect those pixels at all. In other words, you are subtracting out something that is not there. What you need is a method of detecting which pixels in your image correspond to foreground, and which correspond to background. Unfortunately, solving this problem might be at least as difficult as the problem you set out to solve in the first place.
If you just want a pixel-by-pixel comparison with the background you could try something like this:
thresh = 250;
imdiff = sum(((rgbImage-backgroundImage).^2),3);
mask = uint8(imdiff > thresh);
maskedImage = rgbImage.*cat(3,mask,mask,mask);
imshow(maskedImage, []);
You will have to play around with the threshold value until you get the desired masking. The problem you are going to have is that the background is poorly suited for the task. If you had the books in front of a green screen for example, you could probably do a much better job.
You are getting blue patches because you are subtracting two color RGB images. Ideally, in the difference image you expect to get zeros for the background pixels, and non-zeros for the foreground pixels. Since you are in RGB, the foreground pixels may end up having some weird color, which does not really matter. All you care about is that the absolute value of the difference is greater than 0.
By the way, your images are probably uint8, which is unsigned. You may want to convert them to double using im2double before you do the subtraction.
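The same pitfall is easy to demonstrate in NumPy (with one difference: MATLAB's uint8 arithmetic saturates at 0, while NumPy's wraps around modulo 256; either way the signed difference is destroyed):

```python
import numpy as np

fg = np.array([[100]], dtype=np.uint8)  # a foreground pixel value
bg = np.array([[150]], dtype=np.uint8)  # a brighter background pixel

# Unsigned subtraction cannot represent -50: NumPy wraps to 206
# (MATLAB would saturate to 0 instead). The true difference is lost.
print((fg - bg)[0, 0])  # 206

# Converting to double first keeps the signed difference intact.
d = fg.astype(np.float64) - bg.astype(np.float64)
print(d[0, 0])  # -50.0
```

Thresholding the squared or absolute value of the floating-point difference, as in the snippet above, then works regardless of which image is brighter.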
I'd like to show an image and plot something on it and then save it as an image with the same size as the original one. My MATLAB code is:
figH = figure('visible','off');
imshow(I);
hold on;
% plot something
saveas(figH,'1','jpg');
close(figH);
But the resulting image "1.jpg" has saved non-image areas in the plot as well as the image. How can I solve this problem?
The reason your new image is bigger than your original is that the SAVEAS function saves the entire figure window, not just the contents of the axes (which is where your image is displayed).
Your question is very similar to another SO question, so I'll first point out the two primary options encompassed by those answers:
Modify the raw image data: Your image data is stored in variable I, so you can directly modify the image pixel values in I then save the modified image data using IMWRITE. The ways you can do this are described in my answer and LiorH's answer. This option will work best for simple modifications of the image (like adding a rectangle, as that question was concerned with).
Modify how the figure is saved: You can also modify how you save the figure so that it better matches the dimensions of your original image. The ways you can do this (using the PRINT and GETFRAME functions instead of SAVEAS) are described in the answers from Azim, jacobko, and SCFrench. This option is what you would want to do if you were overlaying the image with text labels, arrows, or other more involved plot objects.
Using the second option by saving the entire figure can be tricky. Specifically, you can lose image resolution if you were plotting a big image (say 1024-by-1024 pixels) in a small window (say 700-by-700 pixels). You would have to set the figure and axes properties to accommodate. Here's an example solution:
I = imread('peppers.png'); %# Load a sample image
imshow(I); %# Display it
[r,c,d] = size(I); %# Get the image size
set(gca,'Units','normalized','Position',[0 0 1 1]); %# Modify axes size
set(gcf,'Units','pixels','Position',[200 200 c r]); %# Modify figure size
hold on;
plot(100,100,'r*'); %# Plot something over the image
f = getframe(gcf); %# Capture the current window
imwrite(f.cdata,'image2.jpg'); %# Save the frame data
The output image image2.jpg should have a red asterisk on it and should have the same dimensions as the input image.