I'm trying to create a mask (or similar result) in order to erase pieces of a binary image that are not attached to the object surrounded by the boundary. I saw this thread (http://www.mathworks.com/matlabcentral/answers/120579-converting-boundary-to-mask) about doing this from bwboundaries, but I'm having trouble making suitable changes to it. My goal is to use this code to isolate the part of this topography map that is connected and get rid of the extra pieces. I need to retain the structure inside the bounded area, as I was then going to use bwboundaries to create additional boundary lines of the main object's "interior" structure.
The following is my code to first create the single boundary line by searching for the bottom-left pixel of the black area to begin the trace. It just looks for the first column of the image that isn't completely white and selects the last black pixel. The second section then creates the inner boundary lines. Note that I am attempting this two-step process, but if there is a way to do it with only one I'd like to hear that solution as well. Ultimately I just want boundaries for the main, large black area and the holes inside of it, while getting rid of the extra pieces hanging around.
figName='Images/BookTrace_1';
BW = imread([figName,'.png']);
BW=im2bw(BW);
imshow(BW,[]);
for j = 1:size(BW,2)
    if sum(BW(:,j)) ~= sum(BW(:,1))
        corner = BW(:,j);
        c = j-1;
        break
    end
end
r=find(corner==0);
r=r(end);
outline = bwtraceboundary(BW,[r c],'W',8,Inf,'counterclockwise');
hold on;
plot(outline(:,2),outline(:,1),'g','LineWidth',2);
[B,L] = bwboundaries(BW);
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 2)
end
Any suggestions or tips are greatly appreciated. If there are questions, please let me know and I'll update the post. Thank you!
EDIT: For clarification, my end goal is as in the below image. I need to trace all of the outer and inner boundaries attached to the main object, while eliminating any spare small pieces that are not attached to it.
It's very simple. I actually wouldn't use the code above; I'd use the Image Processing Toolbox instead. There's a built-in function to remove any white pixels that touch the border of the image: the imclearborder function.
The function returns a new binary image where any pixels that were touching the borders of the image are removed. Given your code, it's very simple:
out = imclearborder(BW);
Using the above image as an example, I'm going to threshold it so that the green lines are removed... or rather merged with the other white pixels, and I'll call the above function:
BW = imread('http://i.stack.imgur.com/jhLOw.png'); %// Read from StackOverflow
BW = im2bw(BW); %// Convert to binary
out = imclearborder(BW); %// Remove pixels along border
imshow(out); %// Show image
We get:
If you want the opposite effect, where you retain the boundaries and remove everything else inside, simply create a new image by copying the original one and use the output from above to null out those pixel locations:
out2 = BW; %// Make copy
out2(out) = 0; %// Set pixels not belonging to boundary to 0
imshow(out2); %// Show image
We thus get:
Edit
Given the above desired output, I believe I know what you want now. You wish to fill in the holes for each group of pixels and trace along the boundary of the desired result. The fact that we have this split up into two categories is going to be useful. For those objects that are in the interior, use the imfill function and specify the 'holes' option to fill in any of the black holes so that they're white. For the objects that are on the exterior, this will need a bit of work. What I would do is invert the image so that pixels that are black become white and vice versa, then use the bwareaopen function to clear away any pixels whose area is below a certain amount. This will remove those small isolated black regions that are along the border of the exterior regions. Once you're done, re-invert the image. The effect of this is that the small holes will be eliminated. I chose a threshold of 500 pixels for the area... seems to work well.
Therefore, using the above variables as reference, do this:
%// Fill holes for both regions separately
out_fill = imfill(out, 'holes');
out2_fill = ~bwareaopen(~out2, 500);
%// Merge together
final_out = out_fill | out2_fill;
This is what we get:
If you want a nice green border like in your example to illustrate this point, you can do this:
perim = bwperim(final_out);
red = final_out;
green = final_out;
blue = final_out;
red(perim) = 0;
blue(perim) = 0;
out_colour = 255*uint8(cat(3, red, green, blue));
imshow(out_colour);
The above code finds the perimeter of the objects, then we create a new image where the red and blue channels along the perimeter are set to 0, while setting the green channel to 255.
We get this:
You can ignore the green pixel border that surrounds the image. That's just a side effect with the way I'm finding the perimeter along the objects in the image. In fact, the image you supplied to me had a white pixel border that surrounds the whole region, so I'm not sure if that's intended or if that's part of the whole grand scheme of things.
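If that green ring is unwanted, a minimal hedged tweak (not part of the original code above) is to clear the outermost rows and columns of the perimeter mask before building the colour image:
perim([1 end], :) = false; %// drop perimeter pixels on the top/bottom image border
perim(:, [1 end]) = false; %// drop perimeter pixels on the left/right image border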
To consolidate into a working example so that you can copy and paste into MATLAB, here's all of the code in one code block:
%// Pre-processing
BW = imread('http://i.stack.imgur.com/jhLOw.png'); %// Read from StackOverflow
BW = im2bw(BW); %// Convert to binary
out = imclearborder(BW); %// Remove pixels along border
%// Obtain pixels that are along border
out2 = BW; %// Make copy
out2(out) = 0; %// Set pixels not belonging to boundary to 0
%// Fill holes for both regions separately
out_fill = imfill(out, 'holes');
out2_fill = ~bwareaopen(~out2, 500);
%// Merge together
final_out = out_fill | out2_fill;
%// Show final output
figure;
imshow(final_out);
%// Bonus - Show perimeter of output in green
perim = bwperim(final_out);
red = final_out;
green = final_out;
blue = final_out;
red(perim) = 0;
blue(perim) = 0;
out_colour = 255*uint8(cat(3, red, green, blue));
figure;
imshow(out_colour);
Related
I processed my input image and the result is below. I just need the characters. I tried but can't remove the noise surrounding the characters.
A simple erosion with a small structuring element, like a 3 x 3 square, may work: it would eliminate the small white noise and make the characters darker. You can also take advantage of the fact that the black areas that are not characters are connected to the boundaries of the image, so you can remove them by clearing regions connected to the border.
Therefore, perform an erosion first using imerode, then remove the border-connected regions using imclearborder. However, imclearborder requires that the pixels touching the border are white, so pass in the inverse of the output from imerode, then invert the result again.
Something like this will work and I'll read your image from Stack Overflow directly:
% Read the image and threshold in case
im = imread('https://i.stack.imgur.com/Hl6Y9.jpg');
im = im > 200;
% Erode
out = imerode(im, strel('square', 3));
% Remove the border and find inverse
out = ~imclearborder(~out);
We get this image now:
There are some isolated black holes near the B that you may not want. You can do some additional post-processing by using bwareaopen to remove islands that are below a certain area. I chose this to be 50 pixels from experimentation. You'll have to do this on the inverse of the output from imclearborder:
% Read the image and threshold in case
im = imread('https://i.stack.imgur.com/Hl6Y9.jpg');
im = im > 200;
% Erode
out = imerode(im, strel('square', 3));
% Remove the border
bor = imclearborder(~out);
% Remove small areas and inverse
out = ~bwareaopen(bor, 50);
We now get this:
I have an image which has three classes. Each class is labelled with a number {2,3,4} and the background is {1}. I want to draw the contours of each class in the image. I tried the MATLAB code below, but the contours overlap (blue and green, yellow and green). How can I draw one contour per class?
Img=ones(128,128);
Img(20:end-20,20:end-20)=2;
Img(30:end-30,30:end-30)=3;
Img(50:end-50,50:end-50)=4;
%%Img(60:end-60,60:end-60)=3; %% Add one more rectangle
imagesc(Img);colormap(gray);hold on; axis off;axis equal;
[c2,h2] = contour(Img==2,[0 1],'g','LineWidth',2);
[c3,h3] = contour(Img==3,[0 1],'b','LineWidth',2);
[c4,h4] = contour(Img==4,[0 1],'y','LineWidth',2);
hold off;
This is my expected result
This is happening because each "class" is defined as a hollow square in terms of its shape. Therefore, when you use contour it traces over all boundaries of the square. Take, for example, just one class and plot it on a figure. Specifically, take a look at the first binary image you create with Img == 2. We get this image:
Therefore, if you called contour on this shape, you'd actually be tracing the boundaries of this object. It makes more sense now, doesn't it? If you repeat this for the rest of your classes, that is why the contour lines overlap in colour: the innermost part of one hollow square overlaps the outermost part of another square. When you call contour the first time, you actually get this:
As you can see, "class 2" is actually defined to be the hollowed out grey square. If you want to achieve what you desire, one way is to fill in each hollow square then apply contour to this result. Assuming you have the image processing toolbox, use imfill with the 'holes' option at each step:
Img=ones(128,128);
Img(20:end-20,20:end-20)=2;
Img(50:end-50,50:end-50)=3;
Img(30:end-30,30:end-30)=3;
Img(35:end-35,35:end-35)=3;
Img(50:end-50,50:end-50)=4;
imagesc(Img);colormap(gray);hold on; axis off;axis equal;
%// New
%// Create binary mask with class 2 and fill in the holes
im = Img == 2;
im = imfill(im, 'holes');
%// Now draw contour
[c2,h2] = contour(im,[0 1],'g','LineWidth',2);
%// Repeat for the rest of the classes
im = Img == 3;
im = imfill(im, 'holes');
[c3,h3] = contour(im,[0 1],'b','LineWidth',2);
im = Img == 4;
im = imfill(im, 'holes');
[c4,h4] = contour(im,[0 1],'y','LineWidth',2);
hold off;
We now get this:
I have an image
I have obtained its phase-only reconstruction using the fftn function.
My aim is:
Using the phase-only reconstruction of the given image, I will get only edges and lines.
Then I want to color these lines and edges, say with red or blue, in the phase-only reconstructed image.
Then I want to put this "colored" image on top of the original image so that the edges and lines from the original image are highlighted in the respective red or blue color.
But when I run the code, I get the following error:
'Subscript indices must either be real positive integers or logicals.
Error in sagar_image (line 17)
superimposing(ph) = 255;'
So what should I do?
clc;
close all;
clear all;
img=imread('D:\baby2.jpg');
figure,imshow(img);
img=rgb2gray(img);
fourier_transform=fftn(img);%take fourier transform of gray scale image
phase=exp(1j*angle(fourier_transform));
phase_only=ifftn(phase);%compute phase only reconstruction
figure,imshow(phase_only,[]);
ph=im2uint8(phase_only);%convert image from double to uint8
superimposing = img;
superimposing(ph) = 255;
figure,
imshow(superimposing,[]),
superimposing(ph) = 255 could mean one of two things:
1. ph contains indices of superimposing that you wish to paint white (255).
2. ph is a logical image of the same size as superimposing; every pixel that evaluates to true in ph would be painted white in superimposing.
In your code ph is neither: im2uint8 produces a uint8 image that contains zeros, and zero is not a valid subscript, which is why you get the "Subscript indices must either be real positive integers or logicals" error. (A toy example of the two interpretations follows this list.)
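To make the distinction concrete, here is a minimal toy sketch; the array names A and B and the 3 x 3 size are made up purely for illustration:
% Case 1: an array of linear indices -> those elements are painted white
A = zeros(3);
A([1 5 9]) = 255;          % elements 1, 5 and 9 (the diagonal) become 255
% Case 2: a logical mask of the same size -> true positions are painted white
B = zeros(3);
B(logical(eye(3))) = 255;  % same result, but via a logical mask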
What you meant was probably:
threshold = 0.2;
superimposing(real(phase_only) > threshold) = 255;
If you want to fuse the two images and see them one on top of the other use imfuse:
imshow(imfuse(real(phase_only), img, 'blend'))
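If, instead of blending, you want the detected lines painted in a solid colour on top of the original (step 3 of your aim), a hedged sketch along these lines should work; edge_mask and overlayRGB are names I'm introducing here, and the threshold is reused from above:
edge_mask = real(phase_only) > threshold;  % pixels considered "edges/lines"
overlayRGB = repmat(img, [1 1 3]);         % grayscale original as an RGB base
R = overlayRGB(:,:,1); G = overlayRGB(:,:,2); B = overlayRGB(:,:,3);
R(edge_mask) = 255; G(edge_mask) = 0; B(edge_mask) = 0;  % paint the detected pixels red
overlayRGB = cat(3, R, G, B);
figure, imshow(overlayRGB);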
I know this thread about converting black to white and white to black simultaneously.
I would like to convert only black to white.
I also know this thread about doing what I am asking, but I do not understand what goes wrong.
Picture
Code
rgbImage = imread('ecg.png');
grayImage = rgb2gray(rgbImage); % for non-indexed images
level = graythresh(grayImage); % threshold for converting image to binary,
binaryImage = im2bw(grayImage, level);
% Extract the individual red, green, and blue color channels.
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
% Make the black parts pure red.
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 0;
blueChannel(~binaryImage) = 0;
% Now recombine to form the output image.
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
Which gives
There seems to be something wrong in the red color channel.
Black is just (0,0,0) in RGB, so removing it should mean turning every (0,0,0) pixel to white (255,255,255).
Trying this idea with
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
Gives
So I must have misunderstood something in MATLAB. The blue color should not contain any black, so this last image is strange.
How can I turn only the black color to white?
I want to keep the blue color of the ECG.
If I understand you properly, you want to extract the blue ECG plot while removing the text and axes. The best way to do that would be to examine the HSV colour space of the image. The HSV colour space is great for discerning colours, just like the way humans do. We can clearly see that there are two distinct colours in the image.
We can convert the image to HSV using rgb2hsv and we can examine the components separately. The hue component represents the dominant colour of the pixel, the saturation denotes the purity or how much white light there is in the pixel and the value represents the intensity or strength of the pixel.
Try visualizing each channel by doing:
im = imread('http://i.stack.imgur.com/cFOSp.png'); %// Read in your image
hsv = rgb2hsv(im);
figure;
subplot(1,3,1); imshow(hsv(:,:,1)); title('Hue');
subplot(1,3,2); imshow(hsv(:,:,2)); title('Saturation');
subplot(1,3,3); imshow(hsv(:,:,3)); title('Value');
Hmm... well, the hue and saturation don't help us at all. They're telling us the dominant colour and saturation are the same... but what sets them apart is the value. If you take a look at the image on the right, we can tell the two apart by the strength of the colour itself. So what it's telling us is that the "black" pixels are actually blue, but with almost no strength associated with them.
We can use this to our advantage: any pixels whose value component is above a certain threshold are the ones we want to keep.
Try setting a threshold... something like 0.75. MATLAB's dynamic range for the HSV values is [0,1], so:
mask = hsv(:,:,3) > 0.75;
When we threshold the value component, this is what we get:
There's obviously a bit of quantization noise... especially around the axes and font. What I'm going to do next is perform a morphological erosion so that I can eliminate the quantization noise that's around each of the numbers and the axes. I'm going to use a slightly large structuring element to ensure that I remove this noise. Using the image processing toolbox:
se = strel('square', 5);
mask_erode = imerode(mask, se);
We get this:
Great, so what I'm going to do now is make a copy of your original image, then set any pixel that is black in the mask I derived above to white in the final image. All of the other pixels should remain intact. This way, we can remove the text and axes seen in your image:
im_final = im;
mask_final = repmat(mask_erode, [1 1 3]);
im_final(~mask_final) = 255;
I need to replicate the mask in the third dimension because this is a colour image and I need to set each channel to 255 simultaneously in the same spatial locations.
When I do that, this is what I get:
Now you'll notice that there are gaps in the graph... which is to be expected due to quantization noise. We can go further by converting this image to grayscale and thresholding it, then joining the edges together with a morphological dilation. This is safe because we have already eliminated the axes and text. We can then use this as a mask to index into the original image to obtain our final graph.
Something like this:
im2 = rgb2gray(im_final);
thresh = im2 < 200;
se = strel('line', 10, 90);
im_dilate = imdilate(thresh, se);
mask2 = repmat(im_dilate, [1 1 3]);
im_final_final = 255*ones(size(im), class(im));
im_final_final(mask2) = im(mask2);
I threshold the previous image that we got without the text and axes after I convert it to grayscale, and then I perform dilation with a line structuring element that is 90 degrees in order to connect those lines that were originally disconnected. This thresholded image will contain the pixels that we ultimately need to sample from the original image so that we can get the graph data we need.
I then take this mask, replicate it, make a completely white image and then sample from the original image and place the locations we want from the original image in the white image.
This is our final image:
Very nice! I had to do all of that image processing because your image basically has quantization noise to begin with, so it's going to be a bit harder to get the graph entirely. Ander Biguri explained colour quantization noise in more detail in his answer, so certainly check out his post.
However, as a qualitative measure, we can subtract this image from the original image and see what is remaining:
imshow(rgb2gray(abs(double(im) - double(im_final_final))));
We get:
So it looks like the axes and text are removed fine, but there are some traces in the graph that we didn't capture from the original image and that makes sense. It all has to do with the proper thresholds you want to select in order to get the graph data. There are some trouble spots near the beginning of the graph, and that's probably due to the morphological processing that I did. This image you provided is quite tricky with the quantization noise, so it's going to be very difficult to get a perfect result. Also, these thresholds unfortunately are all heuristic, so play around with the thresholds until you get something that agrees with you.
Good luck!
What's the problem?
You want to detect all black parts of the image, but they are not really black
Example:
Your idea (or your code):
You first binarize the image, selecting the pixels that ARE something against the pixels that are not. In short, you do: if pixel>level; pixel is something
Therefore, there is a small misconception here! When you write
% Make the black parts pure red.
it should read
% Make every pixel that is something (not background) pure red.
Therefore, when you do
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
You are doing
% Make every pixel that is something (not background) white
% (or, what is the same thing in this case, delete them).
Therefore, what you should get is a completely white image. The image is not completely white because some pixels were labelled as "not something, part of the background" by the value of level, which in the case of your image is around 0.6.
A solution one could think of is manually setting the level to 0.05 or similar, so only black pixels are selected in the gray-to-binary thresholding. But this will not work 100%: as you can see, the numbers have some very "non-black" values.
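As a minimal sketch of that manual-threshold idea (purely to illustrate why it falls short; the 0.05 value comes from the paragraph above, and the variable names are reused from your code):
grayImage = rgb2gray(imread('ecg.png'));
binaryImage = im2bw(grayImage, 0.05); % only near-black pixels end up as 0
% ~binaryImage now selects near-black pixels only, but the anti-aliased,
% dark-but-not-black pixels around the numbers are still missed.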
How would I try to solve the problem:
I would try to find the colour you want, extract just that colour from the image, and then delete outliers.
Extract blue using HSV (I believe I answered you somewhere else how to use HSV).
rgbImage = imread('ecg.png');
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
th=0.1;
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
imshow(I2)
Once here, we would like to get the biggest blue part, which would be our signal. Therefore, the best approach seems to be to first binarize the image, taking all the blue pixels:
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
And then using bwlabel, label all the independent pixel "islands".
% Label each "blob"
lbl=bwlabel(~bw);
The most repeated label will be the signal, so we find it and separate the background from the signal using that label.
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
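As a side note, a hedged shorter equivalent for the "most repeated label" step (not in the original answer) would be:
% Most frequent nonzero label = the biggest blue blob, i.e. the signal
idxmax = mode(lbl(lbl > 0));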
Here is a plot with the signal, background and difference (using the same equation as #rayryeng used):
Here is a variation on #rayryeng's solution to extract the blue signal:
%// retrieve picture
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
To get a better view, here is the detected mask overlaid on top of the original image (I'm using the imoverlay function from the File Exchange):
figure
imshow(imoverlay(imgRGB, BW, uint8([255,0,0])))
Here is code for this:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = ~redChannel&~greenChannel&~blueChannel;
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
black is the mask containing the black pixels. These pixels are set to white in each color channel.
In your code you use a threshold on a grayscale image, so of course you have a much bigger area of pixels that gets set to white or red, respectively. In this code, only pixels that contain absolutely no red, green and blue are set to white.
The following code does the same with a threshold for each color channel:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = (redChannel<150)&(greenChannel<150)&(blueChannel<150);
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
A biologist friend of mine asked me if I could help him make a program to count the squama (is this the right translation?) of lizards.
He sent me some images and I tried some things in MATLAB. For some images it's much harder than for others, for example when there are darker (black) regions, at least with my method. I'm sure I can get some useful help here. How should I improve this? Have I taken the right approach?
These are some of the images.
I got the best results by following Image Processing and Counting using MATLAB. It basically turns the image into black and white and then thresholds it. But I did add a bit of erosion.
Here's the code:
img0=imread('C:...\pic.png');
img1=rgb2gray(img0);
%The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). Specify level in the range [0,1].
img2=im2bw(img1,0.65);%(img1,graythresh(img1));
imshow(img2)
figure;
%erode
se = strel('line',6,0);
img2 = imerode(img2,se);
se = strel('line',6,90);
img2 = imerode(img2,se);
imshow(img2)
figure;
imshow(img1, 'InitialMag', 'fit')
% Make a truecolor all-green image. I use this later to overlay it on top of the original image to show which elements were counted (with green)
green = cat(3, zeros(size(img1)),ones(size(img1)), zeros(size(img1)));
hold on
h = imshow(green);
hold off
%counts the elements now defined by black spots on the image
[B,L,N,A] = bwboundaries(img2);
%imshow(img2); hold on;
set(h, 'AlphaData', img2)
text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
figure;
%this produces a new image showing each counted element and its count id on top of it.
imshow(img2); hold on;
colors=['b' 'g' 'r' 'c' 'm' 'y'];
for k = 1:length(B)
    boundary = B{k};
    cidx = mod(k,length(colors))+1;
    plot(boundary(:,2), boundary(:,1), colors(cidx),'LineWidth',2);
    %randomize text position for better visibility
    rndRow = ceil(length(boundary)/(mod(rand*k,7)+1));
    col = boundary(rndRow,2); row = boundary(rndRow,1);
    h = text(col+1, row-1, num2str(L(row,col)));
    set(h,'Color',colors(cidx),'FontSize',14,'FontWeight','bold');
end
figure;
spy(A);
And these are some of the results. In the top-left corner you can see how many were counted.
Also, I think it's useful to have the counted elements marked in green so at least the user can know which ones have to be counted manually.
There is one route you should consider: watershed segmentation. Here is a quick and dirty example with your first image (it assumes you have the IP toolbox):
raw=rgb2gray(imread('lCeL8.jpg'));
Icomp = imcomplement(raw);
I3 = imhmin(Icomp,20);
L = watershed(I3);
%%
imagesc(L);
axis image
Result shown with a colormap:
You can then count the cells as follows:
count = numel(unique(L));
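A small hedged caveat (my addition, not part of the original answer): the label matrix returned by watershed uses 0 for the ridge lines, and one of the labelled basins is the image background, so numel(unique(L)) counts those as well. A slightly tighter count could look like:
% Ignore label 0 (the watershed ridge lines); the background basin is still included
count = numel(unique(L(L > 0)));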
One of the advantages is that it can be directly fed to regionprops and give you all the nice details about the individual 'squama':
r=regionprops(L, 'All');
imshow(raw);
for k = 2:numel(r)
    if r(k).Area > 100 % I chose 100 to filter out the objects with a small area.
        rectangle('Position',r(k).BoundingBox, 'LineWidth',1, 'EdgeColor','b', 'Curvature', [1 1]);
    end
end
Which you could use to monitor over/under segmentation:
Note: special thanks to #jucestain for helping with the proper access to the fields in the r structure here