I'm trying to extract a (very) large number of subimages from a large grayscale TIF file and save each one as a GIF, PNG, or even another TIF using MATLAB. I'm able to display the individual images with imshow(sub(:,:,1),cmap), but when I write the data to an image file, the generated files are just 101x101 px white squares. Passing the cmap argument to imwrite produces the same result, as does changing the image format (I've tried PNG, TIF, GIF, and JPG with no luck). The file a.tif is 16-bit according to the properties dialog in Windows Explorer.
Any help is appreciated. I'm really at my wit's end with this.
% Import coordinates array and correct for multiplication by 10
datafile = 'data.xlsx';
coords = xlsread(datafile,1,'G2:H13057');
x = coords(:,1) ./ 10;
y = coords(:,2) ./ 10;
r = 50;
[img, cmap] = imread('a.tif'); % import the image
s = 2*r+1; % side length of each subimage (size of the box around each point)
num = 4; % number of points to extract for now
sub = zeros(s,s,num); % 3D array; each page is the box around one point
for i = 1:num
    sub(:,:,i) = img((y(i)-r):(y(i)+r),(x(i)-r):(x(i)+r));
    filename = sprintf('dot_%d.png',i);
    imwrite(sub(:,:,i),filename,'png');
end
Try changing the line:
sub = zeros(s,s,num);
to:
sub = zeros(s,s,num,class(img));
I assume the problem is that sub is of type double: imwrite treats double data as if it were in the range [0,1], so your 16-bit pixel values all get clipped to white.
Good luck
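To make this concrete, here is a minimal sketch of the corrected loop, assuming a.tif really does load as uint16 and that x and y are the corrected coordinates from the question:
r = 50;
s = 2*r+1;
num = 4;
[img, cmap] = imread('a.tif');    % img should be uint16 for a 16-bit TIF
sub = zeros(s,s,num,class(img));  % preallocate in the image's own class
for i = 1:num
    sub(:,:,i) = img((y(i)-r):(y(i)+r),(x(i)-r):(x(i)+r));
    imwrite(sub(:,:,i),sprintf('dot_%d.png',i)); % PNG supports 16-bit grayscale
end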
I'm using the following code to show an image from an array (the image was previously converted to an array), but the image does not show correctly:
I = imread('ut.jpg');
image=mat2gray(I);
imshow(image);
FID = fopen('FileName.txt', 'w');
if FID == -1, error('Cannot create file.'); end
fprintf(FID, '%g %g %g ... %g \n', image);
fclose(FID);
x = 100*rand(512,1500);
fileID = fopen('FileName.txt','w');
fprintf(fileID,'%f',x);
fclose(fileID);
imshow(x);
Bitmaps consist of two things: the pixel data (for an RGB image, three matrices of colour intensities stored as uint8 values from 0 to 255) and a header containing information about the size, colour depth, file length, etc.
Your program does not create correct images because the file it writes has no such header.
Regarding the MATLAB procedure:
By using imread on an RGB image, you automatically get such a matrix. If you convert it to grayscale with rgb2gray you will have a single uint8 matrix (without any additional channels).
If you want to save your image after processing it, simply use:
I = imread('ut.jpg');
% Convert, do smth e.g. I_new = rgb2gray(I);
filename = 'myNewImage.jpg';
imwrite(I_new,filename);
By using imwrite you automatically create a correct header.
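If you do want a plain-text dump of the pixel values that you can read back later, here is a minimal sketch (assuming MATLAB R2019a or later for writematrix/readmatrix; older releases can use dlmwrite/dlmread instead):
I = imread('ut.jpg');
G = mat2gray(rgb2gray(I));       % grayscale, values scaled to [0,1]
writematrix(G,'FileName.txt');   % one image row per line of text
G2 = readmatrix('FileName.txt'); % read the numbers back
imshow(G2);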
The input images are a.jpg and b.jpg. These two images are stored in, for example, a comp folder, and I want to write the segmented images to a segment folder. But I think, because of a looping problem, the segmentation is repeated many times for each image, and I couldn't solve the problem.
Here is my code:
Resultado='C:\Users\Nurul\Desktop\picsegment';
srcFiles = dir('C:\Users\Nurul\Desktop\comp\*.jpg');
for i = 1 : length(srcFiles)
filename = strcat('C:\Users\Nurul\Desktop\comp\',srcFiles(i).name);
a = imread(filename);
LLL=a;
s=regionprops(LLL);
figure,imshow(LLL); title('segmented Image');
hold on
for J=1:numel(s)
rectangle('Position',s(J).BoundingBox,'edgecolor','g')
end
im1=LLL;
baseFileName = sprintf('%d.jpg', i); % e.g. "1.jpg"
fullFileName = fullfile(Resultado, baseFileName);
imwrite(im1, fullFileName);
end
Please help, thanks.
You are saving your data as JPEG; big mistake!
Still, if you want to keep saving the data as JPEG, remember that it will not be saved as a binary image, which means you need to binarize it again. Otherwise, every little bit of pixel noise will be detected as data by regionprops; that's why you get so many squares.
Just add
a = imread(filename);
a=im2bw(a,0.5); % Add this line. The fancy way would be im2bw(a,graythresh(a)), but 0.5 will do in your case
LLL=a;
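If you'd rather avoid the JPEG problem entirely, here is a small sketch of the write step using a lossless format instead, reusing the variables from the question (the filename extension is the only change):
baseFileName = sprintf('%d.png', i); % PNG is lossless, so the mask stays binary
fullFileName = fullfile(Resultado, baseFileName);
imwrite(im1, fullFileName);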
I have an image.mat of about 4 MB.
The size of an image file can also be about 4 MB.
Can image.mat be converted to an image file?
I tried this, but that doesn't do the trick:
load image.mat %load Iw
imshow(mat2gray(Iw))
imwrite(Iw,'image.png');
IwNew = imread('image.png');
isequal(Iw,IwNew)
The result is 0; am I misunderstanding something?
The numbers in Iw are very important, so Iw cannot be changed.
Actually, my real problem is how to store floating-point numbers in an image.
But MATLAB does not support TIFF 6.0, so I'll have to find some workaround.
I am doing blind watermarking, and the fractional part of each number in Iw is important because it carries information about another image, so Iw cannot be changed.
Actually, Mathematica can store floating-point image data, but my programs are all in MATLAB.
According to Matlab documentation:
"If A is a grayscale or RGB color image of data type double or single, then imwrite assumes that the dynamic range is [0,1] and automatically scales the data by 255 before writing it to the file as 8-bit values."
In other words: imwrite performs automatic conversion from double to uint8.
If you wish to keep the values of Iw unchanged, save it as a mat file and not as an image.
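A minimal sketch of that lossless route, with no image format involved at all:
save('Iw_exact.mat','Iw'); % keeps the double values bit-exact
S = load('Iw_exact.mat');
isequal(S.Iw, Iw)          % returns 1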
If you do want to save it as an image - there is going to be some loss of information. In this case, there are two things which need to be done:
Change the dynamic range of the matrix to [0,1]. (In your case, the range is between -0.0035 and 255.0035; also, the matrix contains Inf values.)
If you want to check for equality, scale Iw by 255 and convert it to uint8 (or, equivalently, convert IwNew to double and divide by 255).
Code:
load image.mat %load Iw
%step 1, change the dynamic range of the image to [0,1].
%One way to do it is by using mat2gray on each channel separately.
Iw(:,:,1) = mat2gray(Iw(:,:,1));
Iw(:,:,2) = mat2gray(Iw(:,:,2));
Iw(:,:,3) = mat2gray(Iw(:,:,3));
%write the image to file
imwrite(Iw,'image.png');
%read the image
IwNew=imread('image.png');
%scale it, and convert to uint8
Iw2 = uint8(Iw*255);
%check equality
isequal(Iw2,IwNew)
Result:
ans =
1
Alternatively, if you want to convert IwNew to double, perform the following:
%conversion to double
Iw2 = double(IwNew)/255;
Notice that in this case the matrices won't be exactly equal to one another, due to the loss of information during the imwrite process (conversion from double to uint8). Instead, they will be epsilon-close to one another, where epsilon = 0.0001.
In order to test this, write the following:
%equality check
sum(abs(Iw2(:)-Iw(:))>0.0001)
Result:
ans =
0
My MATLAB (R2010a) with the Image Processing Toolbox is perfectly capable of storing double-valued pixel data and retrieving it without loss.
Here's a shameless copy of this answer:
% Some random, data of type double
A = 7.6*rand(10);
% Construct TIFF image...
t = Tiff('test.tif', 'w');
% ...with these custom parameters...
tagstruct = struct(...
'ImageLength' , size(A,1),...
'ImageWidth' , size(A,2),...
'Compression' , Tiff.Compression.None,...
'SampleFormat' , Tiff.SampleFormat.IEEEFP,... % floating point
'Photometric' , Tiff.Photometric.MinIsBlack,...
'BitsPerSample' , 64,... % 8 bytes / double
'SamplesPerPixel' , 1,...
'PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.setTag(tagstruct);
% ...and write it to disk.
t.write(A);
t.close();
% Read the data actually written, and check if all
% information was indeed preserved:
B = imread('test.tif');
isequal(A,B)
Result:
ans =
1
Adjust in obvious ways if you have more than 1 channel (RGB).
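For example, here is a sketch of that adjustment for a 3-channel (RGB) array, still stored as 64-bit floating point. The tag values below are my own guess at the "obvious" changes, not taken from the original answer:
% Random 3-channel data of type double
A = 7.6*rand(10,10,3);
t = Tiff('test_rgb.tif', 'w');
tagstruct = struct(...
'ImageLength' , size(A,1),...
'ImageWidth' , size(A,2),...
'Compression' , Tiff.Compression.None,...
'SampleFormat' , Tiff.SampleFormat.IEEEFP,... % floating point
'Photometric' , Tiff.Photometric.RGB,... % RGB instead of MinIsBlack
'BitsPerSample' , 64,...
'SamplesPerPixel' , 3,... % one sample per channel
'PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.setTag(tagstruct);
t.write(A);
t.close();
isequal(A, imread('test_rgb.tif')) % should again return 1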
Below is my current working code in Python, using PIL, for highlighting the difference between the two images; but the rest of the image is blacked out.
Currently I want to show the background as well, along with the highlighted differences.
Is there any way I can keep the background lighter and just highlight the differences?
from PIL import Image, ImageChops
point_table = ([0] + ([255] * 255))
def black_or_b(a, b):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # diff = diff.point(point_table)
    h, w = diff.size
    new = diff.convert('RGB')
    new.paste(b, mask=diff)
    return new
a = Image.open('i1.png')
b = Image.open('i2.png')
c = black_or_b(a, b)
c.save('diff.png')
(Example image: https://drive.google.com/file/d/0BylgVQ7RN4ZhTUtUU1hmc1FUVlE/view?usp=sharing)
PIL does have some handy image manipulation methods, but also a lot of shortcomings when one wants to start doing serious image processing. Most Python literature will recommend you to switch to using NumPy over your pixel data, which will give you full control. Other imaging libraries such as leptonica, gegl and vips all have Python bindings and a range of nice functions for image composition/segmentation.
In this case, the thing is to imagine how one would get to the desired output in an image manipulation program: you'd have a black (or other color) shade to place over the original image, and over this you'd paste the second image, using a threshold of the differences as a mask for the second image (i.e. a pixel either is equal or is different; all intermediate values should be rounded to "different").
I modified your function to create such a composition:
from PIL import Image, ImageChops, ImageDraw
point_table = ([0] + ([255] * 255))
def new_gray(size, color):
    img = Image.new('L', size)
    dr = ImageDraw.Draw(img)
    dr.rectangle((0, 0) + size, color)
    return img

def black_or_b(a, b, opacity=0.85):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # Hack: there is no threshold in PIL,
    # so we add the difference with itself a few times to do
    # a poor man's thresholding of the mask
    # (the values for equal pixels - 0 - don't add up)
    thresholded_diff = diff
    for repeat in range(3):
        thresholded_diff = ImageChops.add(thresholded_diff, thresholded_diff)
    h, w = size = diff.size
    mask = new_gray(size, int(255 * opacity))
    shade = new_gray(size, 0)
    new = a.copy()
    new.paste(shade, mask=mask)
    # To have the original image show partially on the final result,
    # simply put "diff" instead of thresholded_diff below
    new.paste(b, mask=thresholded_diff)
    return new
a = Image.open('a.png')
b = Image.open('b.png')
c = black_or_b(a, b)
c.save('c.png')
Here's a solution using libvips:
import sys
from gi.repository import Vips
a = Vips.Image.new_from_file(sys.argv[1], access = Vips.Access.SEQUENTIAL)
b = Vips.Image.new_from_file(sys.argv[2], access = Vips.Access.SEQUENTIAL)
# a != b makes an N-band image with 0/255 for false/true ... we have to OR the
# bands together to get a 1-band mask image which is true for pixels which
# differ in any band
mask = (a != b).bandbool("or")
# now pick pixels from a or b with the mask ... dim false pixels down
diff = mask.ifthenelse(a, b * 0.2)
diff.write_to_file(sys.argv[3])
With PNG images, most CPU time is spent in PNG read and write, so vips is only a bit faster than the PIL solution.
libvips does use a lot less memory, especially for large images. libvips is a streaming library: it can load, process and save the result all at the same time, it does not need to have the whole image loaded into memory before it can start work.
For a 10,000 x 10,000 RGB tif, libvips is about twice as fast and needs about 1/10th the memory.
If you're not wedded to the idea of using Python, there are a few really simple solutions using ImageMagick:
“Diff” an image using ImageMagick
I have a real-time application which receives JPEG images encoded in base64. I do not know how to show the image in MATLAB without having to save it to disk and open it afterwards.
This is the code I have so far, that saves the image in the disk before showing it:
raw = base64decode(imageBase64, '', 'java');
fid = fopen('buffer.jpg', 'wb');
fwrite(fid, raw, 'uint8');
fclose(fid);
I = imread('buffer.jpg');
imshow(I);
Thanks!
You can do it with the help of Java. Example:
% get a stream of bytes representing an encoded JPEG image
% (in your case you have this by decoding the base64 string)
fid = fopen('test.jpg', 'rb');
b = fread(fid, Inf, '*uint8');
fclose(fid);
% decode image stream using Java
jImg = javax.imageio.ImageIO.read(java.io.ByteArrayInputStream(b));
h = jImg.getHeight;
w = jImg.getWidth;
% convert Java Image to MATLAB image
p = reshape(typecast(jImg.getData.getDataStorage, 'uint8'), [3,w,h]);
img = cat(3, ...
transpose(reshape(p(3,:,:), [w,h])), ...
transpose(reshape(p(2,:,:), [w,h])), ...
transpose(reshape(p(1,:,:), [w,h])));
% check results against directly reading the image using IMREAD
img2 = imread('test.jpg');
assert(isequal(img,img2))
The first part of decoding the JPEG byte stream is based on this answer:
JPEG decoding when data is given in array
The last part converting Java images to MATLAB was based on this solution page:
How can I convert a "Java Image" object into a MATLAB image matrix?
That last part could also be re-written as:
p = typecast(jImg.getData.getDataStorage, 'uint8');
img = permute(reshape(p, [3 w h]), [3 2 1]);
img = img(:,:,[3 2 1]);
imshow(img)
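Tying this back to the base64 part of the question, here is a minimal sketch (assuming MATLAB R2016b or later, where matlab.net.base64decode is available) that decodes the string straight to bytes and feeds them to the Java decoder without ever touching the disk:
% imageBase64 is the received base64 string
raw = matlab.net.base64decode(imageBase64); % uint8 byte vector, no temp file
jImg = javax.imageio.ImageIO.read(java.io.ByteArrayInputStream(raw));
w = jImg.getWidth; h = jImg.getHeight;
p = typecast(jImg.getData.getDataStorage, 'uint8');
img = permute(reshape(p, [3 w h]), [3 2 1]); % bytes -> h-by-w-by-3
img = img(:,:,[3 2 1]);                      % BGR -> RGB
imshow(img);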