How can I automatically identify multiple lines in an image?

Given a binary image comprising angled lines, how could I automatically identify as many lines as possible? Using the bwtraceboundary function in MATLAB, I have been able to identify one of them by manually providing the starting coordinates of the identified line.
Could anyone point out a way to loop over the matrix of ones and zeros to automatically identify as many of them as possible?
Here's an example image:
% Read the image
I = imread('./synthetic.jpg');
figure(1)
BW = im2bw(I, 0.7);
imshow(BW,[]);
c = 255; % X coordinate of a manually identified line
r = 490; % Y coordinate of a manually identified line
contour = bwtraceboundary(BW,[c r],'NE',8, 1000,'clockwise');
imshow(BW,[]);
hold on;
plot(contour(:,2),contour(:,1),'g','LineWidth',2);
From the above code we get:

Here is a small example of how to use the Hough transform for lines in MATLAB, with some denoising applied to your image first.
This code does not detect all the lines, and you may need to tune or change it a bit; that will require some understanding of what is going on, which is beyond the scope of Stack Overflow. Perhaps someone with more knowledge can suggest a better method:
I = rgb2gray(imread('https://i.stack.imgur.com/fTWHh.jpg'));
I = imgaussfilt(I,1);   % light Gaussian denoising
I = I(90:370,:);        % crop to the region containing the lines
BW = edge(I,'canny');
[H,T,R] = hough(BW);
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));
lines = houghlines(BW,T,R,P,'FillGap',5,'MinLength',3);
figure, imshow(I), hold on
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    % Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
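The loop above also records the longest segment it finds (max_len, xy_long); if that is useful, a small follow-up sketch to highlight it:
% Highlight the longest detected segment, if any were found:
if max_len > 0
    plot(xy_long(:,1), xy_long(:,2), 'LineWidth', 2, 'Color', 'blue');
end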

Related

How to remove straight lines from a Matlab image?

I have a MATLAB image from a matrix named Two_dim, as shown in the figure below.
I would like to remove all 3 of the bottom straight horizontal lines from the image. I looked on Stack Overflow for a way to use regionprops to eliminate horizontal lines and obtained this code, but it doesn't seem to remove the lines.
rp = regionprops(Two_dim, 'PixelIdxList', 'Eccentricity', 'Orientation');
rp = rp([rp.Eccentricity]>0.95 & (abs([rp.Orientation])<2 | abs([rp.Orientation])>89));
Two_dim(vertcat(rp.PixelIdxList)) = false;
Here is an answer using a Hough transform approach; I will add some more explanation to the code later:
% I crop only the interesting part for illustration:
BW = edge(Two_dim(1:1000,:),'canny');
subplot 131
imagesc(Two_dim(1:1000,:))
title('Original image')
axis xy
[H,theta,rho] = hough(BW); % perform the Hough transform
subplot 132
P = houghpeaks(H,10,'NHoodSize',[1 1]); % find the peaks in the transform
lines_found = houghlines(BW,theta,rho,P,...
    'FillGap',50,'MinLength',1); % convert the peaks to line objects
imagesc(Two_dim(1:1000,:)), hold on
result = Two_dim(1:1000,:);
for k = 1:length(lines_found)
    % extract one line:
    xy = [lines_found(k).point1; lines_found(k).point2];
    % Plot the detected lines:
    plot(xy(:,1),xy(:,2),'LineWidth',1,'Color','green');
    % remove the lines from the image:
    % note that I take a buffer of 3 to the 'width' of the line
    result(xy(1,2):xy(1,2)+3,xy(1,1):xy(2,1)) = 0;
end
title('Detected lines')
axis xy
subplot 133
imagesc(result)
title('Corrected image')
axis xy
The output:
You could look at the row intensity sums: the rows containing the lines will stand out, as long as the lines stay horizontal.
grayI = rgb2gray(I);
rowSums = sum(grayI,2);
plot(rowSums);
filterRows = rowSums > 1e5;   % threshold read off the rowSums plot
I(filterRows,:,:) = 255;
regionprops needs a binary image with each pixel being one of two values.
You can do e.g.
Two_dim(Two_dim<0.5) = 0;
Two_dim(Two_dim>=0.5) = 1; % the actual value doesn't matter
or
Two_dim = logical(Two_dim);

Find out which blob has the pixel location [x,y]?

I have an image labelled with bwlabel, and I want to find the blob that contains the pixel location [x, y] and display it, removing the rest of the blobs.
Here is the code I wrote, but it doesn't give the correct answer; please help me fix it.
[y, x] = ginput(1);
x = round(x);
y = round(y); % here x and y is a location of blob i want to keep
BW = bwlabel(newImgg,4) ; % labelled image contains several blobs
% figure, imshow(BW, [])
props = regionprops(logical(BW),'all');
while(1)
    for k = 2:length(props)
        if ismember([x,y],props(k,1).PixelList) == [1, 1];
            keeperIndex = k;
            break
        end
    end
    break
end
keeperBlobsImage = ismember(BW, keeperIndex);
keeperBlobsImage = imfill(keeperBlobsImage,'holes');
figure, imshow(keeperBlobsImage,[])
Thanks,
Gopi
I do not currently have a MATLAB license, so I can't test this on my machine, and I've also been away from MATLAB syntax for a while. Here's an idea:
From MATLAB's documentation, PixelList is an array where each row is formatted [x,y,...], depending on your dimensions.
Working with your image, I'm assuming PixelList has the format [x,y].
Looping through PixelList, keep track of the indices you want to discard. If you measured n pixels:
discardList = [];
for i = 1:n
    if ~isequal(PixelList(i,:), [target_x, target_y])
        discardList = [discardList, i];
    end
end
newPixelList = PixelList;
newPixelList(discardList,:) = [];
Again, I haven't used MATLAB for a decent amount of time now, so I apologize for any problems in the syntax (brackets, loops, and conditionals)
EDIT/UPDATE:
According to MATLAB's documentation, bwlabel is used only on a BW (binary) image, so make sure you're doing that.
Also, in the output of regionprops you should have WeightedCentroid.
From your ginput, find the region whose centroid is the closest.
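A minimal sketch of that idea (not tested), using the plain Centroid property, and assuming BW is the label image from bwlabel and x, y are the rounded ginput coordinates (x horizontal, y vertical):
% Pick the blob whose centroid is closest to the clicked point.
props = regionprops(BW, 'Centroid');            % with a label matrix, props(k) corresponds to label k
centroids = vertcat(props.Centroid);            % one [x y] row per blob
[~, keeperIndex] = min(hypot(centroids(:,1) - x, centroids(:,2) - y));
keeperBlobsImage = ismember(BW, keeperIndex);   % keep only that blob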
My suggestion would be to use the vision.BlobAnalysis System object:
[x, y] = ginput(1);
bA = vision.BlobAnalysis;
centroids = step(bA, BWImage);   % one [x y] row per blob
Using the documentation, make sure you turn off all "output ports" of the System object, and keep only the centroid output port on.
d = 1e10;        % current best (smallest) distance
cIndex = 0;      % index of the closest blob
dArr = [x, y; 0, 0];
for i = 1:size(centroids,1)
    dArr(2,:) = centroids(i,:);
    d2 = pdist(dArr);   % distance between the clicked point and this centroid
    if (d2 < d)
        d = d2;
        cIndex = i;
    end
end
The variable cIndex will contain the index of the blob you need. You can run blob analysis and isolate it from the rest.
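For example, a sketch of the isolation step (this assumes the blob order from vision.BlobAnalysis matches the labels from bwlabel, which is worth verifying on your data):
labels = bwlabel(BWImage, 4);
keeperBlobsImage = ismember(labels, cIndex);
figure, imshow(keeperBlobsImage, [])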

Image differences detection in matlab

I'm trying to find the broken ligaments in these two photos. Because of the pattern they have, I can use the conv2 function to find the general broken areas. However, it is really hard for me to work out how to make it identify the exact broken ligaments. Can you give me some ideas for how to find which ligaments are broken?
Because I'm new to this website, I cannot post more photos with the 2-D convolution results.
Original Picture
Broken Picture
Make a region growing algorithm inside each perfect square.
Once you get that, calculate the area of that section.
Once you find this, calculate the remaining areas. The larger values will be the broken ligaments :)
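A rough sketch of that area idea (not tested), using simple binarization in place of explicit region growing; the filenames 'unbroke.jpg' and 'broke.jpg' and the 1.5 factor are assumptions to adjust:
% Compare hole areas: where a ligament is broken, two neighbouring holes
% merge into one region that is much larger than any hole in the intact grid.
refGray = rgb2gray(imread('unbroke.jpg'));
curGray = rgb2gray(imread('broke.jpg'));
refBW = im2bw(refGray, graythresh(refGray));
curBW = im2bw(curGray, graythresh(curGray));
refHoles = regionprops(~refBW, 'Area');              % invert the other way if your grid is dark on white
curHoles = regionprops(~curBW, 'Area', 'Centroid');
maxNormalArea = max([refHoles.Area]);
broken = curHoles([curHoles.Area] > 1.5 * maxNormalArea);
disp(vertcat(broken.Centroid))                        % approximate locations of the breaks
Alternatively, the Hough-transform code below extracts the grid's line segments directly: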
img = imread('unbroke.jpg');
level = graythresh(rgb2gray(img));
BW = im2bw(rgb2gray(img),level);
BW2= imdilate(imerode(BW, ones(5)), ones(5));
BW3 = bwmorph(BW2,'remove');
figure, imshow(BW2), hold on
[H,T,R] = hough(BW2);
P = houghpeaks(H,15,'threshold',ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
lines = houghlines(BW2,T,R,P,'FillGap',5,'MinLength',7);
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    % Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if ( len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
lines from unbroken image
lines from broken image
Now that you know what the line segments are, do some matching. Alternatively, find pairs of segments that would be connected (same slope and same x/y intercept) within a threshold, as sketched below.
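A sketch of that pairing idea (not tested). houghlines already returns the theta/rho of the Hough bin each segment came from, which is an equivalent parameterization to slope/intercept; the tolerances here are arbitrary starting points:
for a = 1:numel(lines)
    for b = a+1:numel(lines)
        sameAngle = abs(lines(a).theta - lines(b).theta) <= 2;  % degrees
        sameDist  = abs(lines(a).rho   - lines(b).rho)   <= 3;  % pixels
        if sameAngle && sameDist
            fprintf('segments %d and %d may be two pieces of one broken ligament\n', a, b);
        end
    end
end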
This may be an interesting way to do it too. I saved only the second image, as 'image.jpg'.
I = imread('image.jpg');
J = imbinarize(rgb2gray(I)); % Threshold to get a BW image.
BW = bwpropfilt(~J, 'Area', [35001, 1013283]);
imshow(BW)
shows
For selecting the area thresholds easily, I used https://www.mathworks.com/help/images/calculate-region-properties-using-image-region-analyzer.html
If you have an older MATLAB version where imbinarize or bwpropfilt doesn't exist, you can use equivalent thresholding functions and regionprops to extract all objects within the area range, as sketched below.
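A sketch of that older-toolbox route (not tested); the area range is the same one passed to bwpropfilt above:
I = imread('image.jpg');
G = rgb2gray(I);
J = im2bw(G, graythresh(G));          % threshold to get a BW image
lbl = bwlabel(~J);                    % label the inverted mask, as in bwpropfilt(~J, ...)
rp = regionprops(lbl, 'Area');
keep = find([rp.Area] >= 35001 & [rp.Area] <= 1013283);
BW = ismember(lbl, keep);             % keep only objects within the area range
imshow(BW)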

Read the corresponding image in matlab

I calculated the distance between an image A1 and different images (image1, image2, image3 and image4) based on their hierarchicalCentroid. dist_1_1{ii} contains 4 values. I want to find the minimum value present in dist_1_1{ii}, but the index I always comes out as 1, and I also want to show the image which gives the minimum value. Please help me. Thanks in advance.
%% demo
clc,clear all,close all
plotFlag = 1;
depth = 6;
alef1 = im2bw(imread('C1.bmp')); %% Binary image
vec1 = hierarchicalCentroid(alef1,depth,plotFlag);
% subplot(1,3,1);
A=[];
vec2=[];
dist_1_1=[];
for ii=1:4
    A{ii} = imread(['image' num2str(ii) '.bmp']);
    % subplot(1,3,2);
    vec2{ii} = hierarchicalCentroid(A{ii},depth,plotFlag);
    %subplot(1,3,3);
    %vec3 = hierarchicalCentroid(tav,depth,plotFlag);
    % vec4=hierarchicalCentroid(A,depth,plotFlag);
    % vec5=hierarchicalCentroid(A,depth,plotFlag);
    dist_1_1{ii} = sum((vec1 - vec2{ii}) .^ 2);
    [~,I] = min(dist_1_1{ii});
    figure;
    subplot(1,2,1);imshow(alef1);
    subplot(1,2,2);imshow(A{I});
end
Assuming that your images are named image1.png, image2.png, ...,
first read and store the images in a cell array:
for ii=1:n
A{ii} = imread(['image' num2str(ii) '.png']);
end
Then compute the similarity between the image A1 and other images:
ind = computeSimilarity(A1,A); % here you compute the similarity
(Of course, you would need a for loop.)
After you have stored the values in the ind vector:
ind = [0.76,1.96,2.96];
Then find the index of the minimum value and choose the image accordingly:
[~,I] = min(ind);
figure;
subplot(1,2,1);imshow(A1);
subplot(1,2,2);imshow(A{I});
What should be corrected in your code:
First of all, avoid using a cell array when it is not necessary, and define it correctly when you are using it. You cannot define a cell array like A=[]; you should do it like this: A=cell(2,3). For instance, for storing the descriptor vectors you do not need a cell array; just store them as a matrix, as I did.
Second of all, when posting your code here, remove the unnecessary parts such as commented-out plots and commands.
Then try to modify your code as follows. I might have made some mistakes with the dimensions, but you should get the main idea.
Also remember that you do not need to check each distance inside the loop: calculate the vectors first and then find the distances in one step, as I did.
depth = 6;
alef1 = im2bw(imread('C1.bmp'));
vec1 = hierarchicalCentroid(alef1,depth,0);
A = cell(1,4);
vecMatrix = zeros(4,length(vec1));
for ii = 1:4
    A{1,ii} = imread(['image' num2str(ii) '.bmp']);
    vecMatrix(ii,:) = hierarchicalCentroid(A{1,ii},depth,0);
end
dist = sum((repmat(vec1,4,1) - vecMatrix) .^ 2,2);
[~,I] = min(dist);
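Then, to also display the best match next to the query image (a small usage sketch):
figure;
subplot(1,2,1); imshow(alef1);   title('query')
subplot(1,2,2); imshow(A{1,I});  title(['closest match: image' num2str(I) '.bmp'])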

How do I efficiently create a BW mask for this microscopic image?

So, some background: I was tasked with writing a MATLAB program to count the number of yeast cells in visible-light microscopic images. To do that, I think the first step will be cell segmentation. Before I got the real experimental image set, I developed an algorithm on a test image set using watershed, which looks like this:
The first step of the watershed is generating a BW mask for the cells. Then I generate a bwdist image with imposed local minima derived from the BW mask. With that I can generate the watershed easily.
As you can see, my algorithm relies on the successful generation of the BW mask, because I need to generate the bwdist image and markers from it. Originally, I generated the BW mask with the following steps (put together in the sketch after the list):
Generate the local standard deviation of the image: sdImage = stdfilt(grayImage, ones(9))
Use BW thresholding to generate the initial BW mask: binaryImage = sdImage < 8;
Use imclearborder to clear the background, and use some other code to add back the cells on the border.
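Putting those three steps together, the original pipeline looks roughly like this (a sketch; grayImage stands for the grayscale input, and the threshold of 8 comes from the steps above):
sdImage = stdfilt(grayImage, ones(9));     % local standard deviation
binaryImage = sdImage < 8;                 % low-texture regions: cell interiors and background
cellMask = imclearborder(binaryImage);     % drop the background component touching the border
% (cells cut off by the image border still have to be added back separately)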
Background finished. Here is my problem:
But today I received the new real data sets. The image resolution is much lower and the lighting conditions are different from the test image set. The color depth is also much smaller. These make my algorithm useless. Here it is:
Using stdfilt failed to generate a good, clean image; instead it generates something like this (note: I have adjusted the parameters of the stdfilt function and the BW threshold value, and the following is the best result I can get):
As you can see, there are light pixels in the centers of the cells that are not necessarily darker than the membrane, which leads the BW thresholding to generate output like this:
The new BW image after thresholding has either incomplete membranes or segmented cell centers, making it unsuitable for the other steps.
I only started image processing recently and have no idea how I should proceed. If you have an idea, please help me! Thanks!
For your convenience, I have attached a Dropbox link to a subset of the images.
I think there's a fundamental problem in your approach. Your algorithm uses stdfilt in order to binarize the image. But what that essentially means is that you're assuming there is low "texture" in the background and within the cell. This works for your first image. However, in your second image there is "texture" within the cell, so this assumption is broken.
I think a stronger assumption is that there is a "ring" around each cell (valid for both images you posted). So I took the approach of detecting this ring instead.
So my approach is essentially:
Detect these rings (I use a 'log' filter and then binarize based on positive values; however, this results in a lot of "chatter").
Try to remove some of the "chatter" initially by filtering out very small and very large regions.
Now, fill in these rings. However, there is still some "chatter" and some filled regions between cells left.
Again, remove small and large regions, but since the cells are filled, increase the bounds for what is acceptable.
There are still some bad regions; most of the bad areas are going to be regions between cells. Regions between cells are detectable by observing the curvature around the boundary of the region: they "bend inwards" a lot, which is expressed mathematically as a large portion of the boundary having negative curvature. Also, to remove the rest of the "chatter", these regions will have a large standard deviation in the curvature of their boundary, so remove boundaries with a large standard deviation as well.
Overall, the most difficult part will be removing regions between cells and the "chatter" without removing the actual cells.
Anyway, here's the code (note that there are a lot of heuristics, and it's very rough and based on code from older projects, homework, and Stack Overflow answers, so it's definitely far from finished):
cell = im2double(imread('cell1.png'));
if (size(cell,3) == 3)
cell = rgb2gray(cell);
end
figure(1), subplot(3,2,1)
imshow(cell,[]);
% Detect edges
hw = 5;
cell_filt = imfilter(cell, fspecial('log',2*hw+1,1));
subplot(3,2,2)
imshow(cell_filt,[]);
% First remove a border of width hw, then filter out non-cell regions by area
mask = cell_filt > 0;
hw = 5;
mask = mask(hw:end-hw-1,hw:end-hw-1);
subplot(3,2,3)
imshow(mask,[]);
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 50 & vertcat(rp.Area) < 2000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,4)
imshow(mask,[]);
% Now fill objects
mask1 = true(size(mask) + hw);
mask1(hw+1:end, hw+1:end) = mask;
mask1 = imfill(mask1,'holes');
mask1 = mask1(hw+1:end, hw+1:end);
mask2 = true(size(mask) + hw);
mask2(hw+1:end, 1:end-hw) = mask;
mask2 = imfill(mask2,'holes');
mask2 = mask2(hw+1:end, 1:end-hw);
mask3 = true(size(mask) + hw);
mask3(1:end-hw, 1:end-hw) = mask;
mask3 = imfill(mask3,'holes');
mask3 = mask3(1:end-hw, 1:end-hw);
mask4 = true(size(mask) + hw);
mask4(1:end-hw, hw+1:end) = mask;
mask4 = imfill(mask4,'holes');
mask4 = mask4(1:end-hw, hw+1:end);
mask = mask1 | mask2 | mask3 | mask4;
% Filter out large and small regions again
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 100 & vertcat(rp.Area) < 5000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,5)
imshow(mask);
% Filter out regions with lots of positive concavity
% Get boundaries
[B,L] = bwboundaries(mask);
% Cycle over boundaries
for i = 1:length(B)
    b = B{i};
    % Filter boundary - use circular convolution
    b(:,1) = cconv(b(:,1),fspecial('gaussian',[1 7],1)',size(b,1));
    b(:,2) = cconv(b(:,2),fspecial('gaussian',[1 7],1)',size(b,1));
    % Find curvature
    curv_vec = zeros(size(b,1),1);
    for j = 1:size(b,1)
        p_b = b(mod(j-2,size(b,1))+1,:); % p_b = point before
        p_m = b(mod(j,size(b,1))+1,:);   % p_m = point middle
        p_a = b(mod(j+2,size(b,1))+1,:); % p_a = point after
        dx_ds = p_a(1)-p_m(1);           % First derivative
        dy_ds = p_a(2)-p_m(2);           % First derivative
        ddx_ds = p_a(1)-2*p_m(1)+p_b(1); % Second derivative
        ddy_ds = p_a(2)-2*p_m(2)+p_b(2); % Second derivative
        curv_vec(mod(j,size(b,1))+1) = dx_ds*ddy_ds-dy_ds*ddx_ds; % store at the middle point's index (wraps around)
    end
    if (sum(curv_vec > 0)/length(curv_vec) > 0.4 || std(curv_vec) > 2.0)
        L(L == i) = 0;
    end
end
mask = L ~= 0;
subplot(3,2,6)
imshow(mask,[])
Output1:
Output2:
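From here, the mask can feed the watershed pipeline described in the question. A sketch (not tested; imextendedmin is just one way to obtain the imposed minima, and the depth of 2 is arbitrary):
D = -bwdist(~mask);                        % distance transform, negated so cell centers become basins
D = imimposemin(D, imextendedmin(D, 2));   % impose minima to limit over-segmentation
Lw = watershed(D);
Lw(~mask) = 0;                             % keep only labels inside the mask
numCells = numel(unique(Lw(Lw > 0)))       % rough cell count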
