I have to create a contrast map of an image. The method is simple: estimate the contrast (C = SD/Mean) in a 5x5 sliding window, move the window pixel by pixel through the entire picture, and save the results in a new matrix.
I wrote the script below to solve the problem, but it is too slow. The base picture is a 1024x1024 16-bit .tif image. The process takes ~40-50 seconds, while other programs solve it in less than 3 seconds. How can I speed up my script?
My original script:
M=imread('image.tif');
% Pad the image: replicate the last row and column twice (bottom/right edges)
for S = 1:2
    SM1_b = size(M,1);
    SM2_b = size(M,2);
    M(:,SM2_b+1) = M(:,SM2_b);
    M(SM1_b+1,:) = [M(SM1_b,1:SM2_b) M(SM1_b,SM2_b)];
end
% Replicate the first row and column twice (top/left edges)
M = [M(1,1:SM2_b+1); M(1,1:SM2_b+1); M];
M = [M(1:size(M,1),1) M(1:size(M,1),1) M];
clear S SM1_b SM2_b
KPic=zeros(1024);
for i = 1:1024
    for j = 1:1024
        KPic(i,j) = std2(M(i:i+4,j:j+4)) / mean2(M(i:i+4,j:j+4));
    end
end
One way could be to build an image of the local mean using a 5x5 window, then an image of the local standard deviation with the same 5x5 window. Once you have these, you can calculate the result fairly quickly.
M = double(imread('image.tif'));   % filter2/conv2 require single or double input
h = fspecial('average', 5);        % 5x5 averaging kernel
mean_im = filter2(h, M);           % local mean over each 5x5 window
std_im = stdfilt(M, ones(5));      % local standard deviation over each 5x5 window
KPic = std_im ./ mean_im;
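If fspecial or stdfilt are unavailable, the same map can be built from two plain box filters using the identity var(X) = E[X^2] - E[X]^2. Here is a minimal sketch of that route; note that std2/stdfilt normalise by N-1, so the variance is rescaled by 25/24 to match, and the border handling (zero padding from filter2) differs slightly from the replication padding in the original script.
M       = double(imread('image.tif'));
h       = ones(5) / 25;                    % 5x5 box (averaging) kernel
mean_im = filter2(h, M);                   % local mean, E[X]
mean_sq = filter2(h, M.^2);                % local mean of squares, E[X^2]
var_im  = max(mean_sq - mean_im.^2, 0);    % clamp tiny negatives caused by round-off
std_im  = sqrt(var_im * 25/24);            % rescale to the N-1 normalisation of std2/stdfilt
KPic    = std_im ./ mean_im;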
I am learning image analysis and trying to average a set of color images and get the standard deviation at each pixel.
I have done this, but not by splitting into the individual RGB channels (e.g. rchannel = I(:,:,1)):
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i=1:length(filelist)
imname = ['dir1/' filelist(i).name];
rgbim = im2double(imread(imname));
ims = ims + rgbim;
end
avgset1 = ims/length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't work out how to get the matrix holding the standard deviation.
Any help is appreciated.
If you are concerned with finding the mean RGB image, then your code is correct. What I like is that you convert the images with im2double before accumulating the sum, so everything is kept in double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to find the mean image of a set of images before doing image classification, as it allows the dynamic range of each pixel to be brought into a normalized range. This helps the training of the learning algorithm converge quickly to the optimum solution and provide the best set of parameters, which in turn gives the best classification accuracy.
If instead you want to find the mean RGB colour, that is the average colour over all pixels of all images, then no, your code is not complete.
You have accumulated the per-pixel sum over all images, channel by channel, in sumrgbims, so the last step is to average this over the rows and columns of each channel as well and divide by the number of images. Two chained calls to mean over the first and second dimensions produce a 1 x 1 x 3 array, and squeeze then removes the singleton dimensions, leaving a 3 x 1 vector that represents the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(mean(mean(sumrgbims, 1), 2)) / length(filelist);
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image, in addition to accumulating each image itself, inside the loop. After that, recall that the standard deviation is the square root of the variance, and the variance is equal to the average of the squares minus the square of the mean. We already have the mean image, so you just square it and subtract it from the average sum of squares. Just to be sure our math is right: supposing we have a signal X with mean mu and N values, the variance is
Var(X) = (1/N) * sum_{i=1}^{N} x_i^2 - mu^2
(Source: Science Buddies)
The standard deviation would simply be the square root of the above result. We would thus calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - for standard deviation
for i=1:length(filelist)
imname = ['set1/' filelist(i).name];
rgbim = im2double(imread(imname));
sumrgbims = sumrgbims + rgbim;
sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims/length(filelist);
% New - find standard deviation
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display standard deviation image
figure;
imshow(rgbstdset1, []);
Also, note that I've scaled the display in each imshow call (the empty [] range) so that the smallest value is mapped to 0 and the largest value is mapped to 1. This does not change the actual contents of the images; it is just for display purposes.
I need to Fourier phase scramble a whole load of JPEG images in Matlab.
Using the following script I am able to phase-scramble colour images, but when I try it on grayscale images it does not work. I already have the Image Processing Toolbox installed in MATLAB.
Any advice on how to modify the code so that it phase-scrambles grayscale images would be much appreciated. I have tried reducing ImSize from 3 to 2 and changing (:,:,layer) to (:,layer), but still no joy!
Thanks in advance,
Maria
Im = mat2gray(double(imread('c:\nick\matlab\randomphase\Bear.jpg')));
%read and rescale (0-1) image
ImSize = size(Im);
RandomPhase = angle(fft2(rand(ImSize(1), ImSize(2))));
%generate random phase structure
for layer = 1:ImSize(3)
ImFourier(:,:,layer) = fft2(Im(:,:,layer));
%Fast-Fourier transform
Amp(:,:,layer) = abs(ImFourier(:,:,layer));
%amplitude spectrum
Phase(:,:,layer) = angle(ImFourier(:,:,layer));
%phase spectrum
Phase(:,:,layer) = Phase(:,:,layer) + RandomPhase;
%add random phase to original phase
ImScrambled(:,:,layer) = ifft2(Amp(:,:,layer).*exp(sqrt(-1)*(Phase(:,:,layer))));
%combine Amp and Phase then perform inverse Fourier
end
ImScrambled = real(ImScrambled); %get rid of imaginary part in image (due to rounding error)
imwrite(ImScrambled,'BearScrambled.jpg','jpg');
imshow(ImScrambled)
In your code you are using ImSize = size(Im); and then ImSize(3); the problem is that for grayscale images ImSize has only two elements, because the image is a 2-D matrix.
Instead use size(Im,3), which returns 1, as MATLAB always assumes additional trailing singleton dimensions. For the same reason, the rest of your code is already compatible with grayscale images: Im(:,:,layer) with layer=1 returns the full image, which is what you want in this case.
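In other words, only the loop header needs to change. Here is a minimal sketch of the modified loop, assuming Im and RandomPhase are defined exactly as in the question's script; the body is just a condensed version of the question's own code.
% Works for both grayscale (size(Im,3) == 1) and RGB (size(Im,3) == 3) inputs.
for layer = 1:size(Im, 3)
    ImFourier(:,:,layer)   = fft2(Im(:,:,layer));                       % Fourier transform of this layer
    Amp(:,:,layer)         = abs(ImFourier(:,:,layer));                 % amplitude spectrum
    Phase(:,:,layer)       = angle(ImFourier(:,:,layer)) + RandomPhase; % original phase plus random phase
    ImScrambled(:,:,layer) = ifft2(Amp(:,:,layer) .* exp(1i*Phase(:,:,layer)));
end
ImScrambled = real(ImScrambled);   % discard the small imaginary residue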
I have 8 images and I want to show them in the scale-space format shown below. The original image height and width is 256; then, to the right of the original image, the size is halved at every level, so the heights and widths are 128, 64, 32, 16, 8, 4 and 2.
I already have all the images at their respective dimensions. I just want to know how to arrange the images according to the pattern shown below. Thanks in advance.
This looks like you are trying to build a scale-space and display the results to the user. That isn't a problem to do, but bear in mind that you will have to do it with loops, as I don't see how you could manage it otherwise without copying and pasting several lines of code. Actually, I'm going to use a while loop, and I'll tell you why soon.
In any case, you need to declare an output image that has as many rows as the original image, but 1.5 times as many columns to accommodate the images on the right.
First, write code that places the original image on the left side and the half-size version on the right. Once you do this, write a loop that uses indexing to place the remaining images in the right spots until you run out of scales, keeping track of where the next image starts and ends. Bear in mind that each image after the first subsampled one starts at the column just past the original image, and its first row is right where the previous one ends. As an example, let's use the cameraman.tif image, which is exactly 256 x 256, but I will write the code so that it works for any image resolution. When I subsample the image, I will use imresize in MATLAB with a scaling factor of 0.5 to denote subsampling by 2. The reason I use a while loop rather than a for loop is that we can keep resizing until one of the dimensions of the resized image is 1; when that happens, there are no more scales to process, so we exit.
As such:
%// Read in image and get dimensions
im = imread('cameraman.tif');
[rows,cols] = size(im);
%// Declare output image
out = zeros(rows, round(1.5*cols), 'uint8');
out(:,1:cols) = im; %// Place original image to the left
%// Find first subsampled image, then place in output image
im_resize = imresize(im, 0.5, 'bilinear');
[rows_resize, cols_resize] = size(im_resize);
out(1:rows_resize,cols+1:cols+cols_resize) = im_resize;
%// Keep track of the next row we need to write at
rows_counter = rows_resize + 1;
%// For the rest of the scales...
while (true)
%// Resize the image
im_resize = imresize(im_resize, 0.5, 'bilinear');
%// Get the dimensions
[rows_resize, cols_resize] = size(im_resize);
%// Write to the output
out(rows_counter:rows_counter+rows_resize-1, cols+1:cols+cols_resize) = ...
im_resize;
%// Move row counter over for writing the next image
rows_counter = rows_counter + rows_resize;
%// If either dimension gives us 1, there are no more scales
%// to process, so exit.
if rows_resize == 1 || cols_resize == 1
break;
end
end
%// Show the image
figure;
imshow(out);
This is the image I get:
I have to process a very large image (say a 10 MB image file or even more). I have to remove artifacts and dead pixels in MATLAB.
I have read about Block Processing of Large Images, but have no idea how to apply it to a 16-bit image.
I am referring to replacing pixels that have the highest value with the average value of the surrounding pixels. My code does not work on my image, which is 80 MB in size.
numIterations = 30;
avgPrecisionSize = 16; % smaller is better, but takes longer
% Read in the image grayscale:
originalImage = double(rgb2gray(imread('C:\Documents and Settings\admin\Desktop\TM\image5.tif')));
% Get the bad pixels (where the value is 0) and dilate to make sure we catch everything:
badPixels = (originalImage == 0);
badPixels = imdilate(badPixels, ones(12));
%# Create a big gaussian and an averaging kernel to use:
G = fspecial('gaussian',[1 1]*100,50);
H = fspecial('average', [1,1]*avgPrecisionSize);
%# Use a big filter to get started:
newImage = imfilter(originalImage,G,'same');
newImage(~badPixels) = originalImage(~badPixels);
% Now iteratively average to fill in the bad-pixel regions:
for count = 1:numIterations
newImage = imfilter(newImage, H, 'same');
newImage(~badPixels) = originalImage(~badPixels);
end
%% Plot the results
figure(123);
clf;
% Display the mask:
subplot(1,2,1);
imagesc(badPixels);
axis image
title('Region Of the Bad Pixels');
% Display the result:
subplot(1,2,2);
imagesc(newImage);
axis image
set(gca,'clim', [0 255])
title('Infilled Image');
colormap gray
newImage2 = roifill(originalImage, badPixels);
figure(44);
clf;
imagesc(newImage2);
colormap gray
You are doing a few things that are obvious problems (although which one bites first may depend on how far you get into the code before you run out of memory):
1) You are immediately converting the whole image to double
2) You are identifying certain pixels which you want to replace, but passing the whole image to imfilter and then throwing away (presumably) most of the output:
newImage = imfilter(originalImage,G,'same'); % filter across the entire image
newImage(~badPixels) = originalImage(~badPixels); % replace all the good pixels!
Without converting to double, why not first check where the bad pixels are, do your processing on subregions of the appropriate size around those pixels (the subregions can be converted to double and back), and then reassemble the image at the end?
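Here is a minimal sketch of that subregion idea, assuming a single-channel (grayscale) 16-bit image and reusing roifill from the question; the padding margin and file name are only placeholders.
% Repair only the neighbourhoods around the bad pixels; only each small
% patch is converted to double, never the full image.
I     = imread('image5.tif');              % stays uint16
bad   = imdilate(I == 0, ones(12));        % bad-pixel mask, as in the question
stats = regionprops(bad, 'BoundingBox');   % one box per bad-pixel cluster
pad   = 20;                                % extra margin around each cluster
for k = 1:numel(stats)
    bb = round(stats(k).BoundingBox);      % [x y width height]
    r1 = max(bb(2) - pad, 1);  r2 = min(bb(2) + bb(4) + pad, size(I, 1));
    c1 = max(bb(1) - pad, 1);  c2 = min(bb(1) + bb(3) + pad, size(I, 2));
    patch = roifill(double(I(r1:r2, c1:c2)), bad(r1:r2, c1:c2));  % infill within the patch
    I(r1:r2, c1:c2) = cast(patch, class(I));                      % paste back in original class
end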
blockproc may work if you can write your filtering operation as a function that takes in an image block and returns the corrected block. You'll have to use the 'BorderSize' option appropriately, and make sure your function simply returns the original block, without doing any filtering, when it hits a block with no bad pixels in it. You can even have it write the result out to file as well.
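A minimal sketch of the blockproc route, again assuming a grayscale TIFF; the block size, border size, file names and the helper function repair_if_needed are all illustrative placeholders (local functions in a script require R2016b or newer).
% Process the TIFF block by block, reading and writing directly from/to disk,
% so the full image is never held in memory at once.
repairBlock = @(bs) repair_if_needed(bs.data);   % bs.data includes the border pixels
blockproc('image5.tif', [512 512], repairBlock, ...
          'BorderSize', [16 16], ...             % overlap so the filter sees some context
          'Destination', 'image5_repaired.tif');
function out = repair_if_needed(block)
% Hypothetical per-block repair: infill zero-valued (dead) pixels,
% and skip the work entirely if the block has none.
bad = imdilate(block == 0, ones(12));
if ~any(bad(:))
    out = block;                                 % nothing to do in this block
else
    out = cast(roifill(double(block), bad), class(block));
end
end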
I have an image that includes an object and some unwanted regions (small dots). I want to remove them. I have tried a morphological operator, for example 'close', but the result is not perfect. Is there a better way to remove them cleanly? You can download an example image at raw image.
This is my code
load Image.mat %load Img value
Img= bwmorph(Img,'close');
imshow(Img);
You might prefer a faster and vectorized approach using bsxfun along with the information obtained from bwlabel itself.
Note: bsxfun is memory intensive, but that's precisely what makes it faster. Therefore, watch out for the size of B1 in the code below. This method will get slower once it reaches the memory constraints set by the system, but until then it provides good speedup over the regionprops method.
Code
[L,num] = bwlabel( Img );
counts = sum(bsxfun(@eq,L(:),1:num));
B1 = bsxfun(@eq,L,permute(find(counts>threshold),[1 3 2]));
NewImg = sum(B1,3)>0;
EDIT 1: A few benchmarks comparing the bsxfun and regionprops approaches are discussed next.
Case 1
Benchmark Code
Img = imread('coins.png');%%// This one is chosen as it is available in MATLAB image library
Img = im2bw(Img,0.4); %%// 0.4 seemed good to make enough blobs for this image
lb = bwlabel( Img );
threshold = 2000;
disp('--- With regionprops method:');
tic,out1 = regionprops_method1(Img,lb,threshold);toc
clear out1
disp('---- With bsxfun method:');
tic,out2 = bsxfun_method1(Img,lb,threshold);toc
%%// For demo, that we have rejected enough unwanted blobs
figure,
subplot(211),imshow(Img);
subplot(212),imshow(out2);
Output
Benchmark Results
--- With regionprops method:
Elapsed time is 0.108301 seconds.
---- With bsxfun method:
Elapsed time is 0.006021 seconds.
Case 2
Benchmark Code (Only the changes from Case 1 are listed)
Img = imread('snowflakes.png');%%// This one is chosen as it is available in MATLAB image library
Img = im2bw(Img,0.2); %%// 0.2 seemed good to make enough blobs for this image
threshold = 20;
Output
Benchmark Results
--- With regionprops method:
Elapsed time is 0.116706 seconds.
---- With bsxfun method:
Elapsed time is 0.012406 seconds.
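The two wrapper functions called in the benchmark, regionprops_method1 and bsxfun_method1, are not shown in the post. A minimal sketch of what they might look like, each saved in its own .m file and based on the code in this answer and in the regionprops answer further down (the exact signatures are an assumption):
function NewImg = bsxfun_method1(Img, lb, threshold)
% Keep only the labelled blobs whose pixel count exceeds threshold (bsxfun approach).
% Img is unused here; it is kept so both wrappers share the same signature.
num    = max(lb(:));
counts = sum(bsxfun(@eq, lb(:), 1:num));
B1     = bsxfun(@eq, lb, permute(find(counts > threshold), [1 3 2]));
NewImg = sum(B1, 3) > 0;
end
function NewImg = regionprops_method1(Img, lb, threshold)
% Keep only the labelled blobs whose pixel count exceeds threshold (regionprops approach).
st       = regionprops(lb, 'Area', 'PixelIdxList');
toRemove = [st.Area] <= threshold;
NewImg   = Img;
NewImg(vertcat(st(toRemove).PixelIdxList)) = 0;
end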
As pointed out earlier, I have tested with other, bigger images containing a lot of unwanted blobs, for which the bsxfun method doesn't provide any improvement over the regionprops method. Since no such bigger images are available in the MATLAB image library, they couldn't be discussed here. To sum up, choose between these two approaches based on the characteristics of your input. It would be interesting to see how the two approaches perform on your input images.
You can use regionprops and bwlabel to select all regions that are smaller than a certain area (i.e. number of pixels):
lb = bwlabel( Img );
st = regionprops( lb, 'Area', 'PixelIdxList' );
toRemove = [st.Area] < threshold; % fix your threshold here
newImg = Img;
newImg( vertcat( st(toRemove).PixelIdxList ) ) = 0; % remove
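For completeness: if the goal is simply to drop every blob below a given pixel count, bwareaopen does the same thing in a single call (a brief sketch, assuming the Image Processing Toolbox is available).
newImg = bwareaopen( Img, threshold ); % remove every connected component with fewer than threshold pixels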