I need to Fourier phase scramble a whole load of JPEG images in Matlab.
Using the following script I am able to phase scramble colour images, but when I try it on grayscale images it does not work. I have already installed the Image Processing Toolbox in Matlab.
Any advice on how to modify the code so that it phase scrambles grayscale images would be much appreciated. I have tried reducing ImSize from 3 to 2 and reducing (:,:,layer) to (:,layer), but still no joy!
Thanks in advance,
Maria
Im = mat2gray(double(imread('c:\nick\matlab\randomphase\Bear.jpg')));
%read and rescale (0-1) image
ImSize = size(Im);
RandomPhase = angle(fft2(rand(ImSize(1), ImSize(2))));
%generate random phase structure
for layer = 1:ImSize(3)
ImFourier(:,:,layer) = fft2(Im(:,:,layer));
%Fast-Fourier transform
Amp(:,:,layer) = abs(ImFourier(:,:,layer));
%amplitude spectrum
Phase(:,:,layer) = angle(ImFourier(:,:,layer));
%phase spectrum
Phase(:,:,layer) = Phase(:,:,layer) + RandomPhase;
%add random phase to original phase
ImScrambled(:,:,layer) = ifft2(Amp(:,:,layer).*exp(sqrt(-1)*(Phase(:,:,layer))));
%combine Amp and Phase then perform inverse Fourier
end
ImScrambled = real(ImScrambled); %get rid of imaginary part of the image (due to rounding error)
imwrite(ImScrambled,'BearScrambled.jpg','jpg');
imshow(ImScrambled)
In your code you are using ImSize = size(Im); and then ImSize(3). The problem is that for grayscale images ImSize has only two elements, because the image is a 2D matrix.
Instead use size(Im,3), which returns 1, as Matlab always assumes additional singleton dimensions. For the same reason the rest of your code is already compatible with grayscale images: Im(:,:,layer) with layer = 1 returns the full image, which is exactly what you want in this case.
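A minimal sketch of that fix (only the size lookups change; the file path is shortened here, and the rest is your original code):
Im = mat2gray(double(imread('Bear.jpg'))); %works for RGB and grayscale alike
RandomPhase = angle(fft2(rand(size(Im,1), size(Im,2))));
for layer = 1:size(Im,3) %size(Im,3) is 1 for grayscale, 3 for RGB
ImFourier(:,:,layer) = fft2(Im(:,:,layer));
Amp(:,:,layer) = abs(ImFourier(:,:,layer));
Phase(:,:,layer) = angle(ImFourier(:,:,layer)) + RandomPhase;
ImScrambled(:,:,layer) = ifft2(Amp(:,:,layer).*exp(1i*Phase(:,:,layer)));
end
ImScrambled = real(ImScrambled);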
I can't find information online about the intensity rescaling of a 3D image made of several 2D images.
I'm looking for the equivalent of imadjust, which only works for 2D images.
My 3D image is the combination of 2D images stacked together but I have to process the 3D image and not the 2D images one by one.
I can't loop imadjust because I want to process the images as one, to consider all the information available, in all directions.
To apply imadjust to a set of 2D grayscale images while taking the whole intensity range into account, this trick might work:
a = imread('pout.tif');
a = imresize(a,[256 256]); %// re-sizing to match image b's dimension
b = imread('cameraman.tif');
Im = cat(3,a,b);
%// where a, b are separate grayscale images of the same dimensions
%// if you already have the images separately you could edit this line to
%// Im = cat(2,a,b);
%// and skip the reshape step below
%// reshaping into a 2D matrix to apply imadjust
Im = reshape(Im,size(Im,1),[]);
out = imadjust(Im); %// applying imadjust
%// finally reshaping back to its original shape
out = reshape(out,size(a,1),size(a,2),[]);
To check:
x = out(:,:,1);
y = out(:,:,2);
As you can see from the workspace screenshot (not reproduced here), the first image (variable x) is not rescaled all the way to 0-255, because its original range (variable a) did not start near 0.
Edit: You could also do this as a one-step process, as the other answer suggests:
%// reshaping to single column using colon operator and then using imadjust
%// then reshaping it back
out = reshape(imadjust(Image3D(:)),size(Image3D));
Edit 2: As you have your images as a cell array in I2, try this:
I2D = cat(2,I2{:})
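A hedged follow-up sketch, assuming I2 is a cell array of equally sized grayscale images (the mat2cell split back into a cell array is illustrative):
I2D = cat(2, I2{:}); %// side-by-side concatenation into one 2D matrix
out2D = imadjust(I2D); %// one global adjustment over all images
w = size(I2{1},2);
outC = mat2cell(out2D, size(I2{1},1), repmat(w, 1, numel(I2))); %// split back into a cell array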
The only way to do this for a 3D image is to treat the data as a vector and then reshape it back.
Something like this:
%create a random 3D image.
x = rand(10,20,30);
%adjust intensity range over the whole volume, then reshape back to 3D
x_adj = reshape( imadjust(x(:)), size(x) );
I have to create a contrast map for images. The method is simple: estimate the contrast (C = SD/Mean) in a 5x5 sliding window, move the window pixel by pixel through the entire picture, and save the results in a new matrix.
I created this script to solve the problem, but it is too slow. The base picture is a 1024x1024 16-bit .tif image. The process takes ~40-50 seconds, but other programs solve it in less than 3 seconds. How can I speed up my script?
My original script:
M = imread('image.tif');
%replicate the right column and bottom row twice (pad by 2 on those sides)
for S = 1:2
    SM1_b = size(M,1);
    SM2_b = size(M,2);
    M(:,SM2_b+1) = M(:,SM2_b);
    M(SM1_b+1,:) = [M(SM1_b,1:SM2_b) M(SM1_b,SM2_b)];
end
%replicate the top row and left column twice as well
M = [M(1,1:SM2_b+1); M(1,1:SM2_b+1); M];
M = [M(1:size(M,1),1) M(1:size(M,1),1) M];
clear S SM1_b SM2_b
%contrast (SD/mean) in every 5x5 window
KPic = zeros(1024);
for i = 1:1024
    for j = 1:1024
        KPic(i,j) = std2(M(i:i+4,j:j+4)) / mean2(M(i:i+4,j:j+4));
    end
end
One way is to build an image of the local mean using a 5x5 window, and a local standard deviation image using the same 5x5 window. Once you have both, the contrast map is a simple element-wise division.
M = double(imread('image.tif'));
h = fspecial('average', 5);
mean_im = filter2(h, M); %filter2 takes the kernel first, then the image
std_im = stdfilt(M, true(5)); %stdfilt expects a neighbourhood matrix, not a scalar
KPic = std_im ./ mean_im;
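Note that filter2 zero-pads at the borders, while the original script replicates the edge pixels; if you want matching border behaviour, a hedged sketch under that assumption:
Mp = padarray(M, [2 2], 'replicate'); %replicate-pad by the window radius
mean_im = filter2(h, Mp, 'valid'); %the 'valid' part is exactly 1024x1024 again
std_im = stdfilt(Mp, true(5));
std_im = std_im(3:end-2, 3:end-2); %crop the padded result back to 1024x1024
KPic = std_im ./ mean_im;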
I have to process a very large image (say a 10 MB image file, or even more). I have to remove artifacts and dead pixels in MATLAB.
I have read about Block Processing of Large Images, but have no idea how to apply it to a 16-bit image.
I am referring to replacing pixels that have the highest value with the average value of the surrounding pixels. My code is not working on my image, which is 80 MB in size.
numIterations = 30;
avgPrecisionSize = 16; % smaller is better, but takes longer
% Read in the image grayscale:
originalImage = double(rgb2gray(imread('C:\Documents and Settings\admin\Desktop\TM\image5.tif')));
% get the bad pixels where = 0 and dilate to make sure they get everything:
badPixels = (originalImage == 0);
badPixels = imdilate(badPixels, ones(12));
%# Create a big gaussian and an averaging kernel to use:
G = fspecial('gaussian',[1 1]*100,50);
H = fspecial('average', [1,1]*avgPrecisionSize);
%# Use a big filter to get started:
newImage = imfilter(originalImage,G,'same');
newImage(~badPixels) = originalImage(~badPixels);
% Now iteratively average, keeping the good pixels fixed:
for count = 1:numIterations
newImage = imfilter(newImage, H, 'same');
newImage(~badPixels) = originalImage(~badPixels);
end
%% Plot the results
figure(123);
clf;
% Display the mask:
subplot(1,2,1);
imagesc(badPixels);
axis image
title('Region Of the Bad Pixels');
% Display the result:
subplot(1,2,2);
imagesc(newImage);
axis image
set(gca,'clim', [0 255])
title('Infilled Image');
colormap gray
newImage2 = roifill(originalImage, badPixels);
figure(44);
clf;
imagesc(newImage2);
colormap gray
You are doing a few things that are obvious problems (though it depends on how far you get into the code before you run out of memory):
1) You are immediately converting the whole image to double
2) You are identifying certain pixels which you want to replace, but passing the whole image to imfilter and then throwing away (presumably) most of the output:
newImage = imfilter(originalImage,G,'same'); % filter across the entire image
newImage(~badPixels) = originalImage(~badPixels); % replace all the good pixels!
Without converting to double, why not first check where the bad pixels are, do your processing on subregions of the appropriate size around those pixels (the subregions can be converted to double and back), and then reassemble the image at the end?
blockproc may work if you can write your filtering operation as a function that takes in an image block and returns the processed block. You'll have to use the 'BorderSize' option appropriately, and make sure your function just returns the original block without doing any filtering if it hits a block with no bad pixels in it. You can even have it write the result straight out to a file, as in the sketch below.
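A minimal sketch of that blockproc idea (the block size, border size, and the fillBadBlock helper are illustrative, not taken from the original code; the helper would be saved as its own function file):
fun = @(bs) fillBadBlock(bs.data); %bs is the block_struct that blockproc passes in
blockproc('image5.tif', [512 512], fun, ...
    'BorderSize', [16 16], ... %overlap so the fill sees neighbouring pixels
    'Destination', 'image5_filled.tif'); %stream the result to disk instead of memory

function out = fillBadBlock(block)
%Fill zero-valued (dead) pixels in one block; skip blocks with nothing to fix.
bad = (block == 0);
if ~any(bad(:))
    out = block;
    return
end
filled = roifill(double(block), imdilate(bad, ones(12)));
out = cast(filled, 'like', block); %back to the original integer class
end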
I'm looking to create an intensity matrix in Matlab from an intensity image plot (saved as a jpeg (RGB) file) that was made using what appears to be the jet colormap from Matlab. I am essentially trying to reverse engineer the numerical data from the plot. The original image along with the color bar is linked (I do not have enough reputation to insert images).
http://i.imgur.com/BmryO6W.png
I initially thought this could be done with the rgb2gray command, but it produces the following image (shown with the jet colormap applied), which does not match the original image.
http://i.imgur.com/RlBei2z.png
As far as I can tell, the only route available from here is to try matching the RGB value of each pixel to a value in the colormap lookup table. Any suggestions on whether this is the fastest approach?
It looks like your method using rgb2gray is almost working, apart from the scale. Because the colormap is scaled automatically to the contents of your plot, I think you will have to re-scale it manually (unless you can automatically detect the tick labels on the colorbar). You can do this using the following formula:
% Some random data like yours
x = rand(1000) * 256;
% Scale data to fit your range
xRange = [min(x(:)) max(x(:))];
scaleRange = [-10 10];
y = (x - xRange(1)) * diff(scaleRange) / diff(xRange) + scaleRange(1);
You can check the operation's success with
>> [min(y(:)) max(y(:))]
ans =
-10 10
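Applied to the posted plot, a hedged end-to-end sketch (the file name and the colorbar limits are assumptions you would replace with the real values):
rgbIm = imread('BmryO6W.png'); % the exported plot image
g = double(rgb2gray(rgbIm)); % collapse to a single intensity channel
gRange = [min(g(:)) max(g(:))];
scaleRange = [0 1]; % put the colorbar's actual limits here
data = (g - gRange(1)) * diff(scaleRange) / diff(gRange) + scaleRange(1);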
I am trying to plot a set of data in grayscale. However, the image I get always seems to be blue.
I have a set of data, albedo, which ranges over [0, 0.068]; each value is a 1x1 double.
My code is:
for all px,py
albedoMax = 0.0679; albedoMin = 0;
out_im(px,py) = 1/(albedoMax-albedoMin)*(albedo - albedoMin);
imshow(out_im);
drawnow;
end
Basically px,py are the image coordinates that I have to iterate over, and the formula is trying to map the input range of [0, 0.068] to [0, 1]. However, when running this code, I notice that the output is always bluish. I was wondering what went wrong.
Thanks for the help.
Can't you make use of the rgb2gray function?
What you are making is one layer of the RGB image.
If you are creating a homogeneous blue image with constant color, then the normalization is wrong. But if it is just a matter of the image being blue instead of gray, then convert it using:
ImGray = rgb2gray(Im);
Do not forget to distribute the pixels over the full grid/mesh, so that the whole image is filled and not just part of it.
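A hedged sketch of a vectorised alternative (assuming albedo is, or can be assembled into, a 2D matrix of per-pixel values):
albedoMin = 0; albedoMax = 0.0679;
out_im = (albedo - albedoMin) / (albedoMax - albedoMin); % map [0, 0.068] to [0, 1]
imshow(out_im); % a 2D double image in [0, 1] is displayed in grayscale
colormap(gray); % only needed if you display with imagesc/image instead of imshow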