Write MATLAB code that reads a grayscale image and generates the flipped image of the original.
I am trying this code, but it is not giving me the correct flipped image. Help will be much appreciated. Thank you.
clear all
clc
a=imread('pout.tif');
[r,c]=size(a);
for i=r:-1:1
k=1;
for j=1:1:c
temp=a(k,j);
result(k,j)=a(i,j);
result(i,j)=temp;
k=k+1;
end
end
subplot(1,2,1), imshow(a)
subplot(1,2,2),imshow(result)
What you're doing with the indices is unclear. You should also preallocate memory for the result.
clear all
clc
a=imread('pout.tif');
[r,c]=size(a);
result = a; % preallocate memory for result
for i=1:r
for j=1:c
result(r-i+1,j)=a(i,j);
end
end
subplot(1,2,1), imshow(a)
subplot(1,2,2),imshow(result)
You can use basic indexing to flip a matrix. 2D case (gray-scale image):
a = a(:,end:-1:1); % horizontal flip
a = a(end:-1:1,:); % vertical flip
a = a(end:-1:1,end:-1:1); % flip both: 180 degree rotation
For the 3D case (color image), add a third index:
a = a(:,end:-1:1,:); % horizontal flip
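For completeness, MATLAB also ships built-in functions that do the same thing (flip needs R2013b or later):
a = flipud(a); % vertical flip, same as a(end:-1:1,:)
a = fliplr(a); % horizontal flip, same as a(:,end:-1:1)
a = flip(a,2); % generic form: flip along dimension 2 (horizontal)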
I have a specific question to ask about intensity adjustment for image processing. I need a high threshold value to find the small gaps in the image, which are shown as a red circle in the image. I used a manual threshold value of 0.99 to convert the grayscale image to a binary image for the other processing steps. However, as the illumination on the surface is not distributed evenly, some parts of the image are lost. I tried the adaptive method suggested by MATLAB; however, the result is similar to a global graythresh threshold.
I will show my code and result below.
I0 = imread('1_2.jpg');
[R,C,K] = size(I0);
if K==1
I1 = I0;
else
I1 = rgb2gray(I0);
end
%Adjust image to get a standard binary picture
%Adjust image intensity value
I1 = imadjust(I1,[0.1 0.7],[]);
BW0 = im2bw(I1,0.99);
figure;
BW0 = bwareaopen(BW0,10000);
%Fill non_crack hole error
BW0 = bwareaopen(1-BW0,500);
BW0 = 1-BW0;
imshow(BW0);
After this process, only half of the image is left. I want the whole image, thresholded with a locally adaptive intensity threshold but showing the same features as the high global threshold. What can I do?
Thanks
Try adaptthresh:
I0 = imread('1_2.jpg');
[R,C,K] = size(I0);
if K==1
I1 = I0;
else
I1 = rgb2gray(I0);
end
T = adaptthresh(I1, 0.4); %adaptive thresholding
% Convert image to binary image, specifying the threshold value.
BW = imbinarize(I1,T);
% Display the original image with the binary version, side-by-side.
figure
imshowpair(I1, BW, 'montage')
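If the features you are after are darker than the background, it may also help to specify the foreground polarity; this is a guess about your images, and the sensitivity value 0.4 is likewise something to tune:
T = adaptthresh(I1, 0.4, 'ForegroundPolarity', 'dark'); % assumes dark features on a brighter surface
BW = imbinarize(I1, T);
figure
imshowpair(I1, BW, 'montage')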
I need to know how to align an image in MATLAB for further work.
For example, I have the following license plate image and I want to recognize all
the digits.
My program works for straight images, so I need to align the image and then
perform the optical recognition.
The method should be as universal as possible, fitting all kinds of plates at all kinds of angles.
EDIT: I tried to do this with the Hough transform but didn't succeed. Can anybody help me do this?
Any help will be greatly appreciated.
The solution was first hinted at by @AruniRC in the comments, then implemented by @belisarius in Mathematica. The following is my interpretation in MATLAB.
The idea is basically the same: detect edges using Canny method, find prominent lines using Hough Transform, compute line angles, finally perform a Shearing Transform to align the image.
%# read and crop image
I = imread('http://i.stack.imgur.com/CJHaA.png');
I = I(:,1:end-3,:); %# remove small white band on the side
%# edge detection
BW = edge(rgb2gray(I), 'canny');
%# hough transform
[H T R] = hough(BW);
P = houghpeaks(H, 4, 'threshold',ceil(0.75*max(H(:))));
lines = houghlines(BW, T, R, P);
%# shearing transform
slopes = vertcat(lines.point2) - vertcat(lines.point1);
slopes = slopes(:,2) ./ slopes(:,1);
TFORM = maketform('affine', [1 -slopes(1) 0 ; 0 1 0 ; 0 0 1]);
II = imtransform(I, TFORM);
Now let's see the results:
%# show edges
figure, imshow(BW)
%# show accumulation matrix and peaks
figure, imshow(imadjust(mat2gray(H)), [], 'XData',T, 'YData',R, 'InitialMagnification','fit')
xlabel('\theta (degrees)'), ylabel('\rho'), colormap(hot), colorbar
hold on, plot(T(P(:,2)), R(P(:,1)), 'gs', 'LineWidth',2), hold off
axis on, axis normal
%# show image with lines overlaid, and the aligned/rotated image
figure
subplot(121), imshow(I), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1), xy(:,2), 'g.-', 'LineWidth',2);
end, hold off
subplot(122), imshow(II)
If you are using some kind of machine learning toolbox for text recognition, try to learn from ALL plates, not only aligned ones. Recognition results should be equally good whether or not you transform the plate, since transforming adds no new information about the true number to the image.
If all the images have a dark background like that one, you could binarize the image, fit a line to the top or bottom of the bright area, and compute an affine transform from the line's gradient.
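A minimal sketch of that idea, assuming a bright plate on a dark background (the file name, thresholding choice, and top-edge fit are placeholders to adapt):
I = rgb2gray(imread('plate.png')); % hypothetical input image
BW = im2bw(I, graythresh(I)); % bright plate on dark background
topY = nan(1, size(BW,2));
for x = 1:size(BW,2)
y = find(BW(:,x), 1, 'first'); % first bright pixel from the top
if ~isempty(y), topY(x) = y; end
end
valid = find(~isnan(topY));
p = polyfit(valid, topY(valid), 1); % fit a line to the top edge; p(1) is the gradient
T = maketform('affine', [1 -p(1) 0; 0 1 0; 0 0 1]); % shear that levels the fitted line
J = imtransform(I, T);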
A biologist friend of mine asked me if I could help him make a program to count the squama (is this the right translation?) of lizards.
He sent me some images and I tried some things in MATLAB. For some images it's much harder than for others, for example when there are darker (black) regions. At least with my method. I'm sure I can get some useful help here. How should I improve this? Have I taken the right approach?
These are some of the images.
I got the best results by following Image Processing and Counting using MATLAB. It basically turns the image into black and white and then thresholds it, but I did add a bit of erosion.
Here's the code:
img0=imread('C:...\pic.png');
img1=rgb2gray(img0);
%The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). Specify level in the range [0,1].
img2=im2bw(img1,0.65);%(img1,graythresh(img1));
imshow(img2)
figure;
%erode
se = strel('line',6,0);
img2 = imerode(img2,se);
se = strel('line',6,90);
img2 = imerode(img2,se);
imshow(img2)
figure;
imshow(img1, 'InitialMag', 'fit')
% Make a truecolor all-green image. I use this later to overlay it on top of the original image to show which elements were counted (with green)
green = cat(3, zeros(size(img1)),ones(size(img1)), zeros(size(img1)));
hold on
h = imshow(green);
hold off
%counts the elements now defined by black spots on the image
[B,L,N,A] = bwboundaries(img2);
%imshow(img2); hold on;
set(h, 'AlphaData', img2)
text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
figure;
%this produces a new image showing each counted element and its count id on top of it.
imshow(img2); hold on;
colors=['b' 'g' 'r' 'c' 'm' 'y'];
for k=1:length(B),
boundary = B{k};
cidx = mod(k,length(colors))+1;
plot(boundary(:,2), boundary(:,1), colors(cidx),'LineWidth',2);
%randomize text position for better visibility
rndRow = ceil(length(boundary)/(mod(rand*k,7)+1));
col = boundary(rndRow,2); row = boundary(rndRow,1);
h = text(col+1, row-1, num2str(L(row,col)));
set(h,'Color',colors(cidx),'FontSize',14,'FontWeight','bold');
end
figure;
spy(A);
And these are some of the results. In the top-left corner you can see how many were counted.
Also, I think it's useful to have the counted elements marked in green so at least the user can know which ones have to be counted manually.
There is one route you should consider: watershed segmentation. Here is a quick and dirty example with your first image (it assumes you have the Image Processing Toolbox):
raw=rgb2gray(imread('lCeL8.jpg'));
Icomp = imcomplement(raw);
I3 = imhmin(Icomp,20);
L = watershed(I3);
%%
imagesc(L);
axis image
Result shown with a colormap:
You can then count the cells as follows:
count = numel(unique(L)); % note: this count includes label 0 (the watershed ridges) and the background basin
One of the advantages is that it can be directly fed to regionprops and give you all the nice details about the individual 'squama':
r=regionprops(L, 'All');
imshow(raw);
for k=2:numel(r)
if r(k).Area>100 % I chose 100 to filter out objects with a small area.
rectangle('Position',r(k).BoundingBox, 'LineWidth',1, 'EdgeColor','b', 'Curvature', [1 1]);
end
end
Which you could use to monitor over/under segmentation:
Note: special thanks to @jucestain for helping with the proper access to the fields in the r structure here.
When I display my reconstructed images, they are just white. Is there something obviously wrong with my program?
The reconstructed images should have the values of the downsampled image at one pixel in each upsampled 2x2 pixel block. The interpolation method I'm using simply takes the value from one row above and fills the next row with it, repeating the process for the columns.
%% Image Resampling
close all; clear all; clc;
s_dir=pwd;
cd Images;
I=imread('aivazovsky78g.tif','tif');
cd(s_dir)
[N M]=size(I);
figure;
imshow(I)
axis image; hold on;
for k=1:4
pause(1)
I=I(1:2:N, 1:2:M);
[N M]=size(I);
image(I)
end
%% Image Reconstruction
Irec=zeros(2*size(I));
for r=1:5
for n=1:N-1
for m=1:M-1
Irec(2*n-1,2*m-1)=I(n,m);
end
end
[N M]=size(Irec);
for n=2:2:N
for m=2:2:M
Irec(n,:)=Irec(n-1,:);
Irec(:,m)=Irec(:,m-1);
end
end
I=Irec;
figure;
imshow(I)
end
You may use B = imresize(A, scale, 'box'), where a scale of 2 doubles the number of pixels in x and y. The z dimension (if any) keeps the same size.
The 'box' resizing method copies the initial pixel value at (i, j) to its three new neighbors (i+1, j), (i, j+1), and (i+1, j+1), the same method you programmed.
(As for the all-white display: Irec is initialized with zeros, which makes it class double, and imshow treats double images as ranging over [0,1], so values above 1 render white. Casting back to the original class, or calling imshow(Irec, []), fixes the display.)
Not the most efficient way, but here is working code:
% 256x256 grayscale image
I = imread('cameraman.tif');
% double in size
I2 = zeros(2*size(I),class(I));
for i=1:2:size(I2,1)
for j=1:2:size(I2,2)
I2([i i+1],[j j+1]) = I((i-1)/2 + 1, (j-1)/2 + 1);
end
end
% compare against #Magla's solution
I3 = imresize(I,2,'box');
isequal(I2,I3)
Hi, I am trying to get the boundary orientation of an image from the image gradient or Canny edge detector, as in equation 11 of http://www.cs.swan.ac.uk/~csjason/papers/xxmm-pami2008.pdf
I currently have:
clear all
Img = imread('littlecircle.png');
Img = Img(:,:,1);
Img = double(Img);
w = size(Img,1); % width size
h = size(Img,2); % height size
[Ix,Iy] = gradient(Img); %gradient of image
i=1; %iteration for magnetic field loop
b=0; %initialize b to zero
% Magnetic Field
for pxRow = 1:h % fixed pixel row
for pxCol = 1:w % fixed pixel column
for r = 1:h % row of distant pixel
for c = 1:w % column of distant pixel
O(c,r) = [-Iy(c,r),Ix(c,r)]; % O(x) = (-1).^lambda(-Iy(x),Ix(x)) --ERROR HERE
end
end
B(i) = {O}; % filling a cell array with results. read below
i = i+1;
end
end
However, I am getting a subscript indices mismatch when storing into O(c,r). Why is this? Also, if anyone thinks there is a better way to do this from the paper, I would love to hear it. Thanks.
You could do the Canny + orientation detection in one step by modifying MATLAB's Canny edge detection code, or by modifying an alternative like this. Canny works by determining the orientation at each step, so you could modify the code to also return an orientation map for each pixel.
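As a side note, the error in the question comes from assigning the 1x2 vector [-Iy(c,r),Ix(c,r)] to the single element O(c,r); keeping the two components in separate arrays avoids it. A rough sketch of a per-pixel orientation map (the Gaussian smoothing and sigma are my own choices, not the paper's exact formulation):
Img = imread('littlecircle.png');
Img = double(Img(:,:,1));
G = fspecial('gaussian', 9, 2); % smoothing kernel, sigma = 2 chosen arbitrarily
[Ix,Iy] = gradient(imfilter(Img, G, 'replicate'));
Ox = -Iy; % first component of O(x) = (-Iy(x), Ix(x))
Oy = Ix; % second component
theta = atan2(Oy, Ox); % per-pixel orientation map
E = edge(mat2gray(Img), 'canny'); % edge mask
theta(~E) = NaN; % keep orientations on edge pixels only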