How to improve the quality of images in a movie saved via VideoWriter in MATLAB?

I am making a movie in MATLAB with 3 subplots. When I save a figure as a PNG it looks great, but the same frame saved to the video looks ugly. How can I improve the image quality of the video, and in particular remove the extra white-space regions? Also, is there any possibility of seeing a full 360° rotation in the elevation angle? I am using a Linux machine.
Here is the image saved as a PNG:
The same image saved in video mode:
MATLAB Code
v = VideoWriter('myFile.avi');
v.FrameRate = 1;
open(v);

h1 = figure;
set(gcf, 'PaperSize', [15 5]);
set(gcf, 'PaperPosition', [0 0 15 5]);

% Camera positions: first a sweep in azimuth, then a sweep in elevation
d = linspace(0,360,15);                 % azimuth angles
first = length(d);
for j = 1:first
    viewScene(j,:) = [-d(j), 1];
end
d = linspace(-90,90,10);                % elevation angles
for j = 1:length(d)
    viewScene(j+first,:) = [-38, d(j)];
end
final = first + length(d);

for t = 1:final
    for i = 1:3
        subplot(1,3,i)
        plot3()                         % plot3 arguments omitted in the question
        hold on
        plot3()
        axis vis3d;
        view(viewScene(t,:))
        set(findobj(gcf,'type','axes'),'visible','off');
    end
    frame = getframe(gcf);
    writeVideo(v,frame)
    hold off
end
close(v);

By default, VideoWriter creates a Motion JPEG AVI file. You can raise the Quality property to get better-looking frames; Quality = 100 gives the best results.
Another option is to write an Uncompressed AVI, which stores each frame exactly as it is, at the cost of much larger files. You can also convert each frame to an indexed image with the rgb2ind function and write an Indexed AVI, which is also lossless and comes out at roughly a third of the size of an Uncompressed AVI.
You can also give MPEG-4 a shot.
The thing is that Motion JPEG and MPEG-4 are designed for compressing natural scenes, where losing some high-frequency information (edges and sharp transitions) is acceptable. Your frames are line plots rather than natural scenery, so the compression artifacts you are seeing are expected.
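For example, a minimal sketch of these options (the Quality value and the profile strings are taken from the VideoWriter documentation; the 256-colour palette for the indexed route is an assumption, so adapt it to your plots):

% Motion JPEG AVI (the default profile) at maximum quality
v = VideoWriter('myFile.avi');
v.Quality = 100;            % 0-100; 100 gives the best-looking (and largest) Motion JPEG frames
v.FrameRate = 1;
open(v);
% ... your existing loop with writeVideo(v, getframe(gcf)) ...
close(v);

% Uncompressed AVI: lossless, but large files
v = VideoWriter('myFile_uncompressed.avi', 'Uncompressed AVI');

% Indexed AVI: lossless and roughly a third of the uncompressed size
v = VideoWriter('myFile_indexed.avi', 'Indexed AVI');
frame = getframe(gcf);
[ind, map] = rgb2ind(frame.cdata, 256);   % assumed 256-colour palette
v.Colormap = map;                         % must be set before open(v)
open(v);
writeVideo(v, ind);                       % write indexed frames rather than RGB ones
close(v);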

Related

Speeding up postscript image print

I am developing an application that prints an image by generating PostScript output and sending it to the printer. I convert my image to JPEG, then to an ASCII85 string, append this data to the PostScript file and send it to the printer.
Output looks like:
%!
{/DeviceRGB setcolorspace
/T currentfile/ASCII85Decode filter def
/F T/DCTDecode filter def
<</ImageType 1/Width 3600/Height 2400/BitsPerComponent
8/ImageMatrix[7.809 0 0 -8.053 0 2400]/Decode
[0 1 0 1 0 1]/DataSource F>> image F closefile T closefile}
exec
s4IA0!"_al8O`[\!<E1.!+5d,s5<tI7<iNY!!#_f!%IsK!!iQ0!?(qA!!!!"!!!".!?2"B!!!!"!!!
---------------------------------------------------------------
ASCII85 data
---------------------------------------------------------------
bSKs4I~>
showpage
My goal now is to speed up this code. At the moment it takes about 14 seconds from sending the .ps file to the printer to the moment the printer actually starts printing the page (for a 2 MB file).
Why is it so slow?
Maybe I can reformat the image so printer doesn't need to perform an affine transform of the image?
Maybe I can use a better image encoding?
Any tutorials, clues or advices would be valuable.
One reason it's slow is that JPEG is an expensive compression filter. Try using Flate instead. Don't ASCII85-encode the image; send it as binary, which reduces transmission time and removes another filter. Note that JPEG is lossy, so by 'converting to jpg' you are also sacrificing quality.
You can reduce the amount of effort the printer goes to by creating/scaling the image (before creating the PostScript) so that each image sample matches one pixel in device space. On the other hand, if you are scaling an image up, this means you will need to send more image data to the printer. But usually these days the data connection is fast.
However, this is usually hard to do, and it is often defeated by the fact that the printer may not be able to print to the edge of the media, and so may scale the marking operations down by a small amount so that the content fits on the printable area. It's usually pretty hard to figure out whether that's going on.
Your ImageMatrix is, well, odd. It isn't a 1:1 scaling, and floating-point scale factors are really going to slow down the mapping from user space to device space. And you have a lot of samples to map.
You could also map the image samples into PostScript device space (so that the bottom left is at 0,0 instead of the top left), which would mean you wouldn't have to flip the CTM in the y axis.
But in short, trying to play with the scale factors is probably not worth it, and most printers optimise these transformations anyway.
The colour model of the printer is usually CMYK, so by sending an RGB image you are forcing the printer to do a colour conversion on every sample in the image. For your image that's more than 8.5 million conversions.
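If you do want to pre-scale the image so that each sample maps to one device pixel, as suggested above, a minimal MATLAB sketch, assuming a hypothetical 600 dpi printer and a known physical print size in inches, could look like this:

img = imread('photo.jpg');                 % hypothetical source image
dpi = 600;                                 % assumed printer resolution
printWidthInches  = 6;                     % assumed physical size on the page
printHeightInches = 4;
img = imresize(img, [printHeightInches*dpi, printWidthInches*dpi]);
% embed the resized image with a 1:1 ImageMatrix so the printer
% does not have to rescale the samples itself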

Saving images using Octave but appearing fuzzy upon reloading

I am enrolled in a Coursera Machine Learning course where I am learning about neural networks. I got some hand-written digits data from this link: http://yann.lecun.com/exdb/mnist/
Now I want to convert these data into .jpg format, and I am using this code.
function nx = conv(x)
    nx = zeros(size(x));
    for i = 1:size(x,1)
        c = reshape(x(i,:), 20, 20);            % one 400-element row back to a 20 x 20 image
        imwrite(c, 'data.jpg', 'jpg');
        nx(i,:) = (imread('data.jpg'))(:)';     % read it back and flatten it again
        delete('data.jpg');
    end
end
Then, I run the above code with:
nx=conv(x);
x is 5000 training examples of handwritten digits. Each training example is a 20 x 20 pixel grayscale image of a digit. Each pixel is represented by a floating point number indicating the grayscale intensity at that location.
The 20 x 20 grid of pixels is "unrolled" into a 400-dimensional vector. Each of these training examples becomes a single row in our data matrix x. This gives us a 5000 x 400 matrix x where every row is a training example for a handwritten digit image.
After I run this code, I rewrite an image to disk to check:
imwrite(nx(1,:),'check.jpg','jpg')
However, I find the image is fuzzy. How would I convert these images correctly?
You are saving the images using JPEG, which is a lossy compression algorithm. Lossy algorithms achieve a high compression ratio at the expense of slightly degrading your image; that is most likely why you are seeing fuzzy images, as the fuzziness comes from compression artifacts.
From the looks of it, you want to save exactly what the data should be to file. As such, use a lossless compression algorithm instead, like PNG. Therefore, change your saving line of code to use PNG:
imwrite(c,'data.png','png')
nx(i,:)=(imread('data.png'))(:)';
delete('data.png');
Also:
imwrite(nx(1,:),'check.png','png')
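Putting the pieces together, a corrected version of the conversion function (the same logic as the code above, only with the file format changed to the lossless PNG) might look like:

function nx = conv(x)
    nx = zeros(size(x));
    for i = 1:size(x,1)
        c = reshape(x(i,:), 20, 20);
        imwrite(c, 'data.png', 'png');          % lossless, so no compression artifacts
        nx(i,:) = (imread('data.png'))(:)';
        delete('data.png');
    end
end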

(Matlab) Performance of Gaussian filtering using imfilter on a binary image

I am currently coding a program to keep track of a running fly in a small chamber, what I want is XY-coordinates of the center of the fly.
For this I first filter each frame with a Gaussian filter using fspecial('gaussian',[30 30],100) and imfilter to get a white "cloud" where the fly is. I need this to reduce noise of the center of the fly.
I convert the outcome into a binary image using im2bw with a certain threshold to get a white blob from the aforementioned cloud.
To get the coordinates, I use regionprops that finds the centroid of the white blob.
It already works fine, but it takes ages - roughly 6 hours for 30 minutes of video; the framerate is 100 fps, though.
I have figured out that the Gaussian filtering takes up most of the time - can I tweak this process somehow?
I read about conv2, which is said to be faster but it does not work on binary images, does it? And converting my binary images to single or double messes them up.
I already worked on the code's performance on other levels, like adjusting the search window etc., so the filtering is what is left as far as I can assess.
Thanks in advance
It might be that the smoothing part is unnecessary; a simple thresholding of your image leads to a pretty clear identification of the fly:
f=rgb2gray(imread('frame.png'));
BW=f>30;
props=regionprops(BW, 'BoundingBox');
imshow(f)
rectangle('Position',props.BoundingBox, 'LineWidth',2, 'EdgeColor','b');
Result:
To answer your question about fast smoothing: you could use FFT-based low-pass filtering instead of a moving Gaussian to smooth your frames much faster. Example for one frame (the mask only needs to be computed once):
f=rgb2gray(imread('frame.png'));
D=30;
[x,y]=size(f);
%Generating a disc-shaped binary mask with radius D:
Mask = fspecial('disk',D)==0;
Mask = ~imresize(padarray(Mask, [floor((x/2)-D) floor((y/2)-D)], 1, 'both'), [x y]);
% (Apply this to all the frames:)
MaskedFFT = fftshift(fft2(f)).*Mask;
Filteredf=abs(ifft2(MaskedFFT));
Result:
Original (f)
Filtered (Filteredf)
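To process the whole video this way, the mask above can be computed once and reused for every frame. Here is a rough sketch, where the input file name and the threshold value are placeholders and regionprops/Centroid is what the question already uses:

vr = VideoReader('fly.avi');                 % placeholder input file
thresh = 0.1;                                % placeholder threshold, tune for your video
centroids = [];
while hasFrame(vr)
    f = im2double(rgb2gray(readFrame(vr)));  % fft2 needs floating-point input
    Filteredf = abs(ifft2(fftshift(fft2(f)).*Mask));   % reuse the precomputed Mask
    BW = Filteredf > thresh;
    props = regionprops(BW, 'Centroid');
    if ~isempty(props)
        centroids(end+1,:) = props(1).Centroid;   % XY-coordinates of the (first) blob
    end
end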

imread altering image in Matlab?

I am having an issue reading a single image of a stacked TIFF using imread. The TIFF is 128-by-126. It reads in just fine with ImageJ, but when I read it into MATLAB for some processing it shows an odd streak in the center of the image. With the origin of the image in the top left, rows 63 and 64 are repeated as rows 65 and 66, and the last two rows of the image, 125 and 126, are cut off. I can tell this is happening by visually comparing the image displayed in MATLAB with the image displayed in ImageJ.
If I take the same tiff stack, and save the first frame in ImageJ, I don't have this issue. Even when displaying the outputted matlab image using ImageJ, I see the same issue. However, I would like to automate the process to save images from several tiff stacks as single tiff files, which I can't do in ImageJ, so I turned to Matlab and ran into this issue. I have included my code below. I tried reading the tiff in two different ways and got the same error. It seems to be related to the tiff stack and how matlab reads in the tiffs. I am using Matlab R2012b.
I have included links below to the static ImageJ image I am seeing and the static matlab image I am seeing. I have also included a link for loading the stacked tiff file that is generating these issues for me.
Note: When I have ImageJ output each frame as an individual tiff and I open the first frame from that output in matlab using the same code below, the image is correctly displayed. The error only occurs when reading in the first frame from the image stack in Matlab.
StackOverflow doesn't support embedding TIFF files, but you can view and download them from these links:
Stacked Tiff File - Data I am working with
What the first frame should look like - ImageJ
What I am seeing when loading the first frame in MATLAB
Code Used to Generate the Image
fname='C:\FileLocation\pcd144_012.tif';
im1=imread(fname,1)
imagesc(im1);
axis image; colormap gray;
I tried reading in the image as a Tiff object to see if it solved the problem, and that didn't work either. The image has two strips, and the last two lines of the first strip are the same as the first two lines of the last strip, which is why the middle lines seem to be repeated. It seems MATLAB is indexing into the image incorrectly when reading it, likely because it is not a square image. Am I just doing something wrong, or does MATLAB have a bug with respect to reading in non-square TIFFs? Any ideas or suggestions for improvement?
First off, I kind of agree with horchler: there is something wrong in your header.
We can easily observe that the StripByteCounts (15872) does not match width*height (128*126). This could be the reason you see the repetition in rows 63-64 and 65-66.
RowsPerStrip = 64 and StripOffsets = [8,15880] suggest the file really describes a 128*124 image, so MATLAB perhaps reuses the last two rows of the first 64 rows to pad the missing rows at the beginning of the remaining rows, filling the total up to 126 rows. This is just my guess about how MATLAB handles the disagreement between the stated dimensions and the byte counts.
In any case, to answer your question: imread does indeed alter the image when reading this TIFF, without issuing any warning. Not a great job by imread here, MATLAB.
After looking at the TIFF frames in one of your links, the TIFF does seem to contain image data with dimensions 128*126. So if you trust the dimensions given in the header, you can use fread to read the frames from your TIFF instead of relying on the shaky imread:
fname = 'pcd144_012.tif';
tiffInfo = imfinfo(fname);
frameIndex = 1;
tiffWidth  = tiffInfo(frameIndex).Width;                   % image width
tiffHeight = tiffInfo(frameIndex).Height;                  % image height
tiffStartOffset = tiffInfo(frameIndex).StripOffsets(1);    % where the image data starts
tiffEndOffset   = tiffInfo(frameIndex).StripOffsets(2);    % where the second strip starts
fid = fopen(fname);
fseek(fid, tiffStartOffset, 'bof');
im1 = fread(fid, tiffWidth*tiffHeight, '*uint16');         % we knew the type from BitsPerSample
fclose(fid);
im1 = reshape(im1, tiffWidth, tiffHeight);                 % reshape the image data array
figure
imagesc(im1);
colormap gray;
axis xy;
axis image;
Now, while this may work around the weird imread behavior, the result above still does not match the picture you showed in your second link. According to the picture in the second link the stack has 300 frames, but the one you attached in your first link only has 30 frames. Maybe we are all looking at the wrong picture?
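If you want to automate this and save every frame of a stack as its own single-frame TIFF (as mentioned in the question), the same fread approach can be wrapped in a loop over the imfinfo entries. A rough sketch, which assumes the strips of each frame are contiguous just as the code above does:

fname = 'pcd144_012.tif';                   % stack file name from the question
tiffInfo = imfinfo(fname);
fid = fopen(fname, 'r');
for k = 1:numel(tiffInfo)
    w = tiffInfo(k).Width;
    h = tiffInfo(k).Height;
    fseek(fid, tiffInfo(k).StripOffsets(1), 'bof');
    im = reshape(fread(fid, w*h, '*uint16'), w, h);        % same reshape as above
    imwrite(im, sprintf('pcd144_012_frame%02d.tif', k));   % one 16-bit TIFF per frame
end
fclose(fid);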

Strange/Magical Image visualization with Matlab

I have an image of class double that I want to display as unsigned 16-bit, so I do:
I = im2uint16(I);
figure;imshow(I);title('Image being saved')
This shows this (with its normal noise):
Now I want to write this image with .png with Bit Depth 16 Bit. I do:
imwrite(I,'image.png','BitDepth',16);
And now the image, opened with Photoshop CS5 or Windows Photo Viewer, looks like this (the noise has magically disappeared):
Can someone explain this strange behaviour?
How to Reproduce this error
Download in C:\test\ the image I used here:
Now run this script:
I = im2double(imread('C:\test\test_matlab.tif'));
% Add gaussian noise with variance = 0.0012
I = imnoise(I,'gaussian',0,0.0012);
figure,imshow(I);
imwrite(I,'C:\test\withNoise.tif');
Then compare the figure shown in MATLAB with the saved file.
It's difficult to say because you didn't give enough data to reproduce the problem, but I'd guess it is a display issue: the image is larger than your physical display window, so some downsampling must be applied to display it. Depending on how that resampling is done, the result can look very different in this scenario.
Suppose MATLAB applies nearest-neighbour resampling for its display; that would explain why the image looks very noisy. If another image viewer instead applies bilinear interpolation or something similar, that amounts to a local average that practically filters out the white noise.
To test this, try the same with a small image, or zoom into the apparently clean image to see it at its real size (100%: one image pixel = one display pixel).
Update: See also here
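A quick way to check this in MATLAB is to force a 1:1 display of (a crop of) the image, for example (the file name is a placeholder; 'InitialMagnification' and truesize are the standard options for this):

I = imread('C:\test\withNoise.tif');              % placeholder path
figure, imshow(I, 'InitialMagnification', 100);   % one image pixel per screen pixel
% or, after displaying with imshow/imagesc:
truesize;                                         % resizes the figure so no resampling occurs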
Here's what I did:
%# read the image (why is it so big?)
I = im2double(imread('https://p7o1zg.bay.livefilestore.com/y1pcQVsmssygbS4BLW24_X1E09BKt_Im-2yAxXBqWesC47gpv5bdFZf962T4it1roSaJkz5ChLBS0cxzQe6JfjDNrF7x-Cc12x8/test_matlab.tif?psid=1'));
%# add noise
I = imnoise(I,'gaussian',0,0.0012);
%# write tiff
imwrite(I,'withNoise.tif');
%# read the tiff again
I2 = imread('withNoise.tif');
class(I2) %# -- oopsie, it's uint8 now!
%# convert to uint16 as in original post
I = im2uint16(I);
%# write again
imwrite(I,'withNoise16.png','bitDepth',16);
%# read it
I2 = imread('withNoise16.png');
%# compare
all(all(I==I2)) %# everything is equal
So there is no funky stuff going on in writing/reading the image (though you do lose some information in the bit conversion - your original image only occupies about a third of the dynamic range, so you lose more information than if you stretched the contrast before converting).
However, the image is 2k-by-2k. When I look only at the top right corner of the image (taking 500-by-500 pixels), it is displayed the same in MATLAB and in other graphics programs. So I bet it's a matter of MATLAB resampling your image differently from other programs. As @leonbloy suggests, MATLAB may be doing nearest-neighbor resampling, while other programs would use interpolation.
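If you do want to use the full dynamic range before the conversion mentioned above, here is a small sketch (mat2gray rescales the data to [0,1]; whether such stretching is appropriate depends on your data):

I = mat2gray(I);        % stretch to the full [0,1] range first
I = im2uint16(I);       % the conversion now uses the whole uint16 range
imwrite(I, 'withNoise16.png', 'BitDepth', 16);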
