I want to find the pixel locations that have value/color = white in a binary mask bw2 and set those pixels of an RGB image to white. This is my attempt:
[x, y] = find(bw2 == 1); %find pixel locations that have value/color = white
[row, colo, z] = size(RGB); %size of rgb image
for i=1:row
    for j=1:colo
        if i==x %if the row of the rgb image is the same as the pixel location row
            if j==y %if the colo of the rgb image is the same as the pixel location colo
                RGB(i,j) = 255;
            end
        end
    end
end
What's wrong?
You can use logical indexing.
For logical indexing to work, you need the mask (bw2) to be the same size as RGB.
Since RGB is a 3D matrix, you need to duplicate bw2 three times along the third dimension.
Example:
%Read sample image.
RGB = imread('autumn.tif');
%Build mask.
bw2 = zeros(size(RGB, 1), size(RGB, 2));
bw2(round(1+(end-30)/2):round((end+30)/2), round(1+(end-30)/2):round((end+30)/2)) = 1; %round() keeps the indices integer when a dimension is odd
%Convert bw2 mask to same dimensions as RGB
BW = logical(cat(3, bw2, bw2, bw2));
RGB(BW) = 255;
figure;imshow(RGB);
Result (just decoration):
In case you want to fix your for-loop implementation, you can do it as follows:
[x, y] = find(bw2 == 1);
[row, colo, z] = size(RGB); %size of rgb image
for i=1:row
    for j=1:colo
        if any(i==x) %if the row of the rgb image is the same as a pixel location row
            if any(j==y(i==x)) %if the colo of the rgb image is the same as a pixel location colo
                RGB(i,j,1)=255; %set Red color channel to 255
                RGB(i,j,2)=255; %set Green color channel to 255
                RGB(i,j,3)=255; %set Blue color channel to 255
            end
        end
    end
end
[x, y] = find(bw2 == 1)
x and y are arrays unless only one pixel is white.
However, if i==x and if j==y compare a single number with an array. This is wrong.
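A minimal sketch of the problem (the values here are just illustrative):
x = [2; 5; 7];   % suppose find() returned these white-pixel rows
i = 5;
i == x           % gives the logical vector [0; 1; 0], not a single true/false
if i == x        % an if-statement on a vector only executes when ALL elements are true
    disp('this branch is effectively never reached');
end
% any(i == x) collapses the comparison to a single logical value, which is what the loop needs.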
As Anthony pointed out, x and y are arrays, so i==x and j==y won't work as intended. Furthermore, RGB(i,j) only uses the first two dimensions, but RGB images have three dimensions. Lastly, from an optimization point of view, the for-loops are unnecessary.
%% Create some mock data.
% Generate a black/white image.
bw2 = rand(10);
% Introduce some 1's in the BW image
bw2(1:5,1:5)=1;
% Generate a RGB image.
RGB = rand(10,10,3)*255;
%% Do the math.
% Build a mask from the bw2 image
bw_mask = bw2 == 1;
% Set RGB at bw_mask pixels to white.
RGB2 = bsxfun(@plus, bw_mask*255, bsxfun(@times, RGB, ~bw_mask)); % MATLAB R2016a and earlier
RGB2 = bw_mask*255 + RGB .* ~bw_mask; % MATLAB R2016b and later
I am trying to find a way to crop, from a circle object (Image A), the largest square that can fit inside it.
Can someone please explain/show me how to find the parameters of the biggest square that fits in the white space inside the circle (Image I) and, based on them, crop that square from the original image (Image A)?
Script:
A = imread('E:/CirTest/Test.jpg');
%imshow(A)
level = graythresh(A);
BW = im2bw(A,level);
%imshow(BW)
I = imfill(BW, 'holes');
imshow(I)
d = imdistline;
[centers, radii, metric] = imfindcircles(A,[1 500]);
imageCrop=imcrop(A, [BoxBottomX BoxBottomY NewX NewY]);
I have a solution for you but it requires a bit of extra work. What I would do first is use imfill but directly on the grayscale image. This way, noisy pixels in uniform areas get inpainted with the same intensities so that thresholding is easier. You can still use graythresh or Otsu's thresholding and do this on the inpainted image.
Here's some code to get you started:
figure; % Open up a new figure
% Read in image and convert to grayscale
A = rgb2gray(imread('http://i.stack.imgur.com/vNECg.jpg'));
subplot(1,3,1); imshow(A);
title('Original Image');
% Find the optimum threshold via Otsu
level = graythresh(A);
% Inpaint noisy areas
I = imfill(A, 'holes');
subplot(1,3,2); imshow(I);
title('Inpainted image');
% Threshold the image
BW = im2bw(I, level);
subplot(1,3,3); imshow(BW);
title('Thresholded Image');
The above code does the three operations that I mentioned, and we see this figure:
Notice that the thresholded image has border pixels that need to be removed so we can concentrate on the circular object. You can use the imclearborder function to remove the border pixels. When we do that:
% Clear off the border pixels and leave only the circular object
BW2 = imclearborder(BW);
figure; imshow(BW2);
... we now get this image:
Unfortunately, there are some noisy pixels, but we can very easily use morphology, specifically the opening operation with a small circular disk structuring element to remove these noisy pixels. Using strel with the appropriate structuring element in addition to imopen should help do the trick:
% Clear out noisy pixels
SE = strel('disk', 3, 0);
out = imopen(BW2, SE);
figure; imshow(out);
We now get:
This mask contains the locations of the circular object that we now need to use to crop our original image. The last part is to determine the row and column locations of this mask, locate the top-left and bottom-right corners, and crop the original image accordingly:
% Find row and column locations of circular object
[row,col] = find(out);
% Find top left and bottom right corners
top_row = min(row);
top_col = min(col);
bottom_row = max(row);
bottom_col = max(col);
% Crop the image
crop = A(top_row:bottom_row, top_col:bottom_col);
% Show the cropped image
figure; imshow(crop);
We now get:
It's not perfect, but it will of course get you started. If you want to copy and paste this in its entirety and run this on your computer, here we are:
figure; % Open up a new figure
% Read in image and convert to grayscale
A = rgb2gray(imread('http://i.stack.imgur.com/vNECg.jpg'));
subplot(2,3,1); imshow(A);
title('Original Image');
% Find the optimum threshold via Otsu
level = graythresh(A);
% Inpaint noisy areas
I = imfill(A, 'holes');
subplot(2,3,2); imshow(I);
title('Inpainted image');
% Threshold the image
BW = im2bw(I, level);
subplot(2,3,3); imshow(BW);
title('Thresholded Image');
% Clear off the border pixels and leave only the circular object
BW2 = imclearborder(BW);
subplot(2,3,4); imshow(BW2);
title('Cleared Border Pixels');
% Clear out noisy pixels
SE = strel('disk', 3, 0);
out = imopen(BW2, SE);
% Show the final mask
subplot(2,3,5); imshow(out);
title('Final Mask');
% Find row and column locations of circular object
[row,col] = find(out);
% Find top left and bottom right corners
top_row = min(row);
top_col = min(col);
bottom_row = max(row);
bottom_col = max(col);
% Crop the image
crop = A(top_row:bottom_row, top_col:bottom_col);
% Show the cropped image
subplot(2,3,6);
imshow(crop);
title('Cropped Image');
... and our final figure is:
You can use bwdist with L_inf distance (aka 'chessboard') to get the axis-aligned distance to the edges of the region, thus concluding the dimensions of the largest bounded box:
bw = imread('http://i.stack.imgur.com/7yCaD.png');
lb = bwlabel(bw);
reg = lb==2; %// pick largest area
d = bwdist(~reg,'chessboard'); %// compute the axis aligned distance from boundary inward
r = max(d(:)); %// find the largest distance to boundary
[cy cx] = find(d==r,1); %// find the location most distant
boundedBox = [cx-r, cy-r, 2*r, 2*r];
And the result is
figure;
imshow(bw,'border','tight');
hold on;
rectangle('Position', boundedBox, 'EdgeColor','r');
Once you have the bounding box, you can use imcrop to crop the original image
imageCrop = imcrop(A, boundedBox);
Alternatively, you can
imageCrop = A(cy + (-r:r-1), cx + (-r:r-1), :); %// keep all color channels if A is RGB
I have an image, size 213 x 145 pixels. I want to resize it to 128 x 128 pixels for example. I've already tried the code below:
i = imread ('alif1.png');
I = imresize (i, [128 128], 'bilinear');
OR
i = imread ('alif1.png');
I = imresize (i, [128 128], 'lanczos3');
it gave me a square image, but the image became disproportionate since the aspect ratio was not preserved.
I want to resize the image to a square shape without distorting or stretching it; rather, the white background should be padded/cropped instead. I still can't figure out the right code. I hope someone can help.
Any help will be very much appreciated :)
I = imread('alif1.png');
Crop the image, specifying the crop rectangle:
I2 = imcrop(I,[75 68 128 128]);
The size and position of the crop rectangle are specified as a four-element position vector of the form [xmin ymin width height].
For more understanding, follow this (MATLAB documentation) link and this (blog) link.
If you want to resize (not crop) the image and keep the aspect ratio (so you don't lose any part of the image AND it doesn't get distorted), you can first add margins to make the image square.
You can achieve this using the function padarray, or just by creating a new image of zeros and then placing your image at the appropriate coordinates.
Once your image is square, you can resize it to 128x128 using imresize.
In order to add margins, you will have to see where to add them (top & bottom OR left & right).
Also, since padarray adds the same amount of margin on both sides, you have to check whether the number of pixels you need to add is even. If it's odd, first add an extra row (or column) of zeros to your image.
So basically you have three options:
Make the image square without preserving the aspect ratio (which is what you already tried)
Crop the image as suggested by @ShvetChakra and @bla (but you will lose some image info)
Add margins to the image and resize (but you will end up with a square image with margins)
Magic doesn't exist so "you must choose, but choose wisely"
(Quote from Indiana Jones and the Last Crusade).
EDIT:
% Example with a 5x2 image, so an extra column will be added
% in order to use padarray.
im = [1 2; 3 4; 5 6; 7 8; 9 10];
nrows = size(im,1);
ncols = size(im,2);
d = abs(ncols-nrows); % difference between ncols and nrows
if(mod(d,2) == 1)      % if the difference is an odd number
    if (ncols > nrows) % we add a row at the end
        im = [im; zeros(1, ncols)];
        nrows = nrows + 1;
    else               % we add a column at the end
        im = [im zeros(nrows, 1)];
        ncols = ncols + 1;
    end
end
if ncols > nrows
    im = padarray(im, [(ncols-nrows)/2 0]);
else
    im = padarray(im, [0 (nrows-ncols)/2]);
end
% Here im is a 5x5 matrix, not perfectly centered
% because we added an odd number of columns: 3
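To finish option 3 above on a real image, the padded (now square) image can then be brought to the target size with imresize; a minimal sketch, assuming the padded image is in im and the target size is 128x128:
im = imresize(im, [128 128], 'bilinear'); % the input is already square, so no distortion is introduced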
How can I change the values of R, G and B in an image manually using MATLAB?
The computation for a green color enhancement needs to be done, so how do we access and change the RGB values using MATLAB?
Assuming you are using imread to read in the image, an RGB image is stored as an M x N x 3 matrix, where M,N are the rows and columns of the image. This is essentially a 3D matrix, where each colour plane is in a particular dimension. The red plane is the first of the third dimensions, the green plane is the second, and the blue plane is the third. As such, you can do something like:
im = imread('onion.png'); %// Built-in to MATLAB
red = im(:,:,1); %// Red channel
green = im(:,:,2); %// Green channel
blue = im(:,:,3); %// Blue channel
You can also merge the planes back by doing: im2 = cat(3, red, green, blue); Now, you can manipulate any of these planes by themselves. If you want to grab a subset of the image, you can do:
imSubset = im(row1:row2, col1:col2, :);
This will grab all pixels between rows row1 to row2 and columns col1 to col2. You can then split up the image into their corresponding planes.
Now, if you want to manually change pixels, you simply access whichever rows and columns you want in each of the planes and set them to whatever you want. For example, if you wanted to set a particular region in your image to all yellow pixels, you can do this:
im(1:50,1:50,1) = 255; %// yellow = full red + full green, no blue
im(1:50,1:50,2) = 255;
im(1:50,1:50,3) = 0;
imshow(im);
This should place a 50-pixel-wide yellow square in the top left corner. You can also do the subset approach by:
imSubset = im(1:50,1:50,:); %// Extract
imSubset(:,:,1) = 255; %// Set the yellow values
imSubset(:,:,2) = 255;
imSubset(:,:,3) = 0;
im(1:50,1:50,:) = imSubset; %// Place back
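As for the green color enhancement asked about in the question, one possible approach is to scale only the green plane; the factor 1.5 below is just an illustrative choice, not a prescribed value:
im = imread('onion.png'); %// built-in sample image
green = double(im(:,:,2)); %// work in double to avoid premature uint8 saturation
green = green * 1.5; %// boost the green channel by 50% (illustrative factor)
im(:,:,2) = uint8(min(green, 255)); %// clamp to the uint8 range and write the plane back
imshow(im);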
If I can be a shameless promoter, take a look at my Introduction to Digital Image Processing using MATLAB slides - http://www.slideshare.net/rayryeng1/introduction-to-digital-image-processing-using-matlab
Good luck!
I have an image and I want to import this image into MATLAB. I am using the following code. The problem I have is that when I convert the image to grayscale, everything changes and the converted image is not similar to the original one. In other words, I want to keep the values (or, let's say, the image) as they are in the original image. Is there any way to do this?
I = imread('myimage.png');
figure, imagesc(I), axis equal tight xy
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
Your original image is already using a jet colormap. The problem is, when you convert it to grayscale, you lose some crucial information. See the image below.
In the original image you have a heatmap. Blue areas generally indicate "low value", whereas red areas indicate "high values". But when converted to grayscale, both areas indicate low value, as they approach dark pixels (see the arrows).
A possible solution is this:
You take every pixel of your image, find the nearest (closest) color value in the jet colormap, and use its index as a gray value.
I will show you first the final code and the results. The explanation goes below:
I = im2double(imread('myimage.png'));
map = jet(256);
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
Igray = zeros(size(I, 1), size(I, 2), 'uint8');
for ii = 1:size(Irgb, 1)
    [~, idx] = min(sum((bsxfun(@minus, Irgb(ii, :), map)) .^ 2, 2));
    Igray(ii) = idx - 1;
end
clear Irgb;
subplot(2,1,1), imagesc(I), axis equal tight xy
subplot(2,1,2), imagesc(Igray), axis equal tight xy
Result:
>> whos I Igray
Name Size Bytes Class Attributes
I 110x339x3 894960 double
Igray 110x339 37290 uint8
Explanation:
First, you get the jet colormap, like this:
map = jet(256);
It will return a 256x3 colormap with the possible colors of the jet palette, where each row is an RGB color. map(1,:) would be kind of a dark blue, and map(256,:) would be kind of a dark red, as expected.
Then, you do this:
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
... to turn your 110x339x3 image into a 37290x3 matrix, where each row is an RGB pixel.
Now, for each pixel, you compute the Euclidean distance from that pixel to each color in map. You take the index of the nearest one and use it as a gray value. The minus one (-1) is because the index is in the range 1..256, but a gray value is in the range 0..255.
Note: the Euclidean distance takes a square root at the end, but since we are just trying to find the closest value, there is no need to do so.
EDIT:
Here is a 10x faster version of the code:
I = im2double(imread('myimage.png'));
map = jet(256);
[C, ~, IC] = unique(reshape(I, size(I, 1) * size(I, 2), 3), 'rows');
equiv = zeros(size(C, 1), 1, 'uint8');
for ii = 1:numel(equiv)
    [~, idx] = min(sum((bsxfun(@minus, C(ii, :), map)) .^ 2, 2));
    equiv(ii) = idx - 1;
end
Irgb = reshape(equiv(IC), size(I, 1), size(I, 2));
Irgb = Irgb(end:-1:1,:);
clear equiv C IC;
It runs faster because it exploits the fact that the colors in your image are restricted to the colors in the jet palette. It finds the unique colors and matches only those to the palette values. With fewer pixels to match, the algorithm runs much faster. Here are the times:
Before:
Elapsed time is 0.619049 seconds.
After:
Elapsed time is 0.061778 seconds.
In the second image, you're using the default colormap, i.e. jet. If you want grayscale, then try using colormap(gray).
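For completeness, a minimal sketch of that suggestion, reusing the variable names from the question:
I = imread('myimage.png');
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
colormap(gray); % display the single-channel image with a gray colormap instead of the default jet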