Python cv2 feature matching gives different results - OpenCV 3.0

When I match SIFT features using FLANN, I found that the same input descriptors give different match pairs within the same process.
Python code:
import cv2
import numpy as np


def match(des_q, des_t):
    FLANN_INDEX_KDTREE = 1
    ratio = 0.7
    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
    search_params = dict(checks=50)
    flann1 = cv2.FlannBasedMatcher(index_params, search_params)
    two_nn = flann1.knnMatch(des_q, des_t, k=2)
    matches = [(first.queryIdx, first.trainIdx) for first, second in two_nn
               if first.distance < ratio * second.distance]
    print(matches)
    return matches


def img_sim(img1, img2):
    # convert the RGB inputs to grayscale (COLOR_RGB2GRAY; cv2.IMREAD_GRAYSCALE is an imread flag, not a conversion code)
    img1 = cv2.cvtColor(img1, cv2.COLOR_RGB2GRAY)
    img2 = cv2.cvtColor(img2, cv2.COLOR_RGB2GRAY)
    sift = cv2.xfeatures2d.SIFT_create()
    eps = 1e-7
    # find the keypoints and descriptors with SIFT (RootSIFT normalisation)
    kp1, des1 = sift.detectAndCompute(img1, None)
    des1 /= (des1.sum(axis=1, keepdims=True) + eps)
    des1 = np.sqrt(des1)
    kp2, des2 = sift.detectAndCompute(img2, None)
    des2 /= (des2.sum(axis=1, keepdims=True) + eps)
    des2 = np.sqrt(des2)
    # same input (des1, des2), different output?
    matches1 = match(des1, des2)
    matches2 = match(des1, des2)


img1 = ''  # query image
img2 = ''  # index image
img1 = cv2.cvtColor(cv2.imread(img1), cv2.COLOR_BGR2RGB)
img2 = cv2.cvtColor(cv2.imread(img2), cv2.COLOR_BGR2RGB)
img_sim(img1, img2)
I found that the matches printed inside match() are different although my input descriptors (des1, des2) are the same. I guess the reason is some kd-tree cache, but I can't solve it. Could anyone help me?
I want the matching result to always be the same. My cv2 version is 3.4.0. Thanks in advance.

I'm not an expert, but I have an assumption:
FLANN is based on a randomized k-d tree algorithm that approximates the nearest neighbor. Its aim is to find a fast approximation with acceptable declines in accuracy. As a consequence, the results may not always be the same.
For detailed information, have a look at the papers listed on the FLANN page.
Depending on what you are using the matches for (e.g. computing a homography), this may not be an issue at all.
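If you need the result to be identical on every call, one option (just a sketch on my side, not verified against your images) is to drop the approximate FLANN matcher and use an exact brute-force matcher, which is deterministic at the cost of speed:

import cv2

def match_exact(des_q, des_t, ratio=0.7):
    # Brute-force matcher with L2 norm, appropriate for SIFT/RootSIFT descriptors
    bf = cv2.BFMatcher(cv2.NORM_L2)
    two_nn = bf.knnMatch(des_q, des_t, k=2)
    # Same ratio test as in the question; no randomized trees are involved,
    # so repeated calls on the same descriptors return the same pairs
    return [(first.queryIdx, first.trainIdx) for first, second in two_nn
            if first.distance < ratio * second.distance]

You could also try calling cv2.setRNGSeed(0) before constructing the FLANN matcher; whether that makes the randomized trees reproducible depends on the OpenCV build, so treat it as an experiment rather than a guarantee.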

Related

Color feature extraction through clustering in an image search engine

I'm trying to implement a perception-based image search engine that will allow users to find pictures containing objects whose colours are the same as or close to a user-specified template (an object from the sample image).
The goal for now is not to match a precise object, but rather to find any significant areas that are close in colour to the template. I am stuck on indexing my dataset.
I have tried some clustering algorithms, such as k-means from sklearn.cluster (as suggested in this article), to select centroids from the sample image as my features, converted to the CIELab colour space for better perceptual uniformity. But it doesn't seem to work well: the cluster centres are initialised randomly, so I get poor metric results even when comparing an object against the very image it was extracted from!
A common approach in simple image-search programs is to use the distance between histograms, but that is not acceptable here because I want a perceptually valid colour difference; by that I mean I can only compare two individual colours (plus perhaps some additional values) when calculating metrics in the CIELab colour space. I am using my own implementation of the CMC l:c metric, and it has produced good results so far.
Maybe someone can help me and recommend an algorithm more suitable for my purpose.
Some code I have written so far:
import cv2 as cv
import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans
from imageproc.color_metrics import *


def feature_extraction(image, features_length=6):
    width, height, dimensions = tuple(image.shape)
    image = cv.cvtColor(image, cv.COLOR_BGR2LAB)
    image = cv.medianBlur(image, 7)
    image = np.reshape(image, (width * height, dimensions))
    clustering_handler = MiniBatchKMeans(n_init=40, tol=0.0, n_clusters=features_length,
                                         compute_labels=False, max_no_improvement=10,
                                         max_iter=200, reassignment_ratio=0.01)
    clustering_handler.fit(image)
    features = np.array(clustering_handler.cluster_centers_, dtype=np.float64)
    features[:, :1] /= 255.0
    features[:, :1] *= 100.0
    features[:, 1:2] -= 128.0
    features[:, 2:3] -= 128.0
    return features


if __name__ == '__main__':
    first_image_name = object_image_name
    second_image_name = image_name
    sample_features = list()
    reference_features = list()
    for name, features in zip([first_image_name, second_image_name],
                              [sample_features, reference_features]):
        image = cv.imread(name)
        features.extend(feature_extraction(image, 6))
    distance_matrix = np.ndarray((6, 6))
    distance_mappings = {}
    for n, i in enumerate(sample_features):
        for k, j in enumerate(reference_features):
            distance_matrix[n][k] = calculate_cmc_distance(i, j)
            distance_mappings.update({distance_matrix[n][k]: (i, j)})
    minimal_distances = []
    for i in distance_matrix:
        minimal_distances.append(min(i))
    minimal_distances = sorted(minimal_distances)
    print(minimal_distances)
    for ii in minimal_distances:
        i, j = distance_mappings[ii]
        color_plate1 = np.zeros((300, 300, 3), np.float32)
        color_plate2 = np.zeros((300, 300, 3), np.float32)
        color1 = cv.cvtColor(np.float32([[i]]), cv.COLOR_LAB2BGR)[0][0]
        color2 = cv.cvtColor(np.float32([[j]]), cv.COLOR_LAB2BGR)[0][0]
        color_plate1[:] = color1
        color_plate2[:] = color2
        cv.imshow("s", np.hstack((color_plate1, color_plate2)))
        cv.waitKey()
    print(sum(minimal_distances))
The usual approach would be to cluster only once, with a representative sample from all images.
This is a preprocessing step, to generate your "dictionary".
Then for feature extraction, you would map points to the fixed cluster centers, which are now shared across all images. This is a simple nearest-neighbor mapping, no clustering.
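A minimal sketch of that pipeline, assuming MiniBatchKMeans as in the question (the function names, sample size, and cluster count are illustrative choices, and the final histogram is just one way of turning the mapping into a comparable feature vector):

import cv2 as cv
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_dictionary(image_paths, n_clusters=64, sample_per_image=5000):
    # Preprocessing step: collect a random sample of Lab pixels from every image
    rng = np.random.default_rng(0)
    samples = []
    for path in image_paths:
        lab = cv.cvtColor(cv.imread(path), cv.COLOR_BGR2LAB).reshape(-1, 3)
        idx = rng.choice(len(lab), size=min(sample_per_image, len(lab)), replace=False)
        samples.append(lab[idx])
    kmeans = MiniBatchKMeans(n_clusters=n_clusters)
    kmeans.fit(np.vstack(samples))   # cluster once, over the combined sample
    return kmeans

def extract_features(kmeans, image_path):
    # Feature extraction: map each pixel to its nearest fixed cluster centre
    lab = cv.cvtColor(cv.imread(image_path), cv.COLOR_BGR2LAB).reshape(-1, 3)
    labels = kmeans.predict(lab)     # nearest-neighbour mapping, no re-clustering
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(np.float64)
    return hist / hist.sum()         # normalised occupancy of each dictionary colour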

Find out which blob has the pixel location [x,y]?

I have an image labelled with bwlabel, and I want to find the blob that contains the pixel location [x,y] and display it, removing the rest of the blobs.
Here is the code I wrote, but it doesn't give the correct answer; please help me fix it.
[y, x] = ginput(1);
x = round(x);
y = round(y); % here x and y give the location of the blob I want to keep
BW = bwlabel(newImgg, 4); % labelled image containing several blobs
% figure, imshow(BW, [])
props = regionprops(logical(BW), 'all');
while (1)
    for k = 2:length(props)
        if ismember([x, y], props(k,1).PixelList) == [1, 1];
            keeperIndex = k;
            break
        end
    end
    break
end
keeperBlobsImage = ismember(BW, keeperIndex);
keeperBlobsImage = imfill(keeperBlobsImage, 'holes');
figure, imshow(keeperBlobsImage, [])
Thanks,
Gopi
I do not currently have a MATLAB license, so I can't test this on my machine, and I've been away from MATLAB syntax for a while. Here's an idea:
From MATLAB's documentation, PixelList is an array where each row is formatted [x,y,...], depending on your dimensions.
Working with your image, I'm assuming PixelList has the format [x,y].
Looping through PixelList, keep track of the indices you want to discard. If you measured n pixels:
discardList = [];
for i = 1:n
    if ~isequal(PixelList(i,:), [target_x, target_y])
        discardList = [discardList, i];
    end
end
newPixelList = PixelList;
newPixelList(discardList, :) = [];
Again, I haven't used MATLAB for a decent amount of time now, so I apologize for any problems in the syntax (brackets, loops, and conditionals)
EDIT/UPDATE:
According to the MATLAB documentation, bwlabel is used only on a binary (BW) image, so make sure you're doing that.
Also, the output of regionprops should include the centroid (Centroid, or WeightedCentroid if you also pass the intensity image).
From your ginput, find the region where the centroid is the closest.
My suggestion would be to use the vision.BlobAnalysis System object:
[y, x] = ginput(1);
bA = vision.BlobAnalysis;
centroids = step(bA, BWImage);
Using the documentation, make sure you turn off all output ports of the System object except the centroid output port.
d = 1e10;
d2 = 0;
dArr = [x, y; 0, 0];
cIndex = 0;
for i = 1:size(centroids, 1)
    dArr(2,:) = centroids(i,:);
    d2 = pdist(dArr); % Euclidean distance between the clicked point and this centroid
    if (d2 < d)
        d = d2;
        cIndex = i;
    end
end
The variable cIndex will contain the index of the blob you need. You can run blob analysis and isolate it from the rest.
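Not part of the MATLAB answer above, but for comparison, here is the same idea sketched with OpenCV in Python (the function name and the background fallback are my own illustration): the label image returned by connected-component analysis tells you directly which blob contains the clicked pixel, and the centroids give the nearest-centroid fallback described above.

import cv2
import numpy as np

def keep_blob_at(binary_image, x, y):
    # Label 4-connected blobs; labels[y, x] is the label of the blob under the clicked point
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary_image, connectivity=4)
    target = labels[y, x]
    if target == 0:
        # Clicked on background: fall back to the blob whose centroid is closest
        dists = np.linalg.norm(centroids[1:] - np.array([x, y]), axis=1)
        target = 1 + int(np.argmin(dists))
    # Keep only the selected blob, remove the rest
    return (labels == target).astype(np.uint8) * 255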

Image difference detection in MATLAB

I'm trying to find the broken ligaments in these two photos. Because of the pattern they have, I can use the conv2 function to find the general broken areas. However, it is really hard for me to work out how to make it tell me the exact broken ligaments. Can you give me some ideas on how to find which ligaments are broken?
Because I'm new to this website, I cannot post more photos with the 2-D convolution results.
Original Picture
Broken Picture
Run a region-growing algorithm inside each perfect square.
Once you have that, calculate the area of that section.
Then calculate the remaining areas; the larger values will be the broken ligaments :)
img = imread('unbroke.jpg');
level = graythresh(rgb2gray(img));
BW = im2bw(rgb2gray(img), level);
BW2 = imdilate(imerode(BW, ones(5)), ones(5));
BW3 = bwmorph(BW2, 'remove');
figure, imshow(BW2), hold on
[H, T, R] = hough(BW2);
P = houghpeaks(H, 15, 'threshold', ceil(0.3*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
lines = houghlines(BW2, T, R, P, 'FillGap', 5, 'MinLength', 7);
max_len = 0;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
    % Plot beginnings and ends of lines
    plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', 'yellow');
    plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', 'red');
    % Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
        xy_long = xy;
    end
end
lines from unbroken image
lines from broken image
Now that you know what the line segments are, do some matching. Alternatively, find pairs of segments that would be connected (same slope and same x/y intercept) within a threshold.
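A rough sketch of that pairing step, written in Python for illustration (the endpoint format and the tolerances are assumptions you would adapt to your Hough output):

def find_broken_pairs(segments, slope_tol=0.05, intercept_tol=5.0):
    # segments: list of ((x1, y1), (x2, y2)) endpoint pairs from the Hough transform
    params = []
    for (x1, y1), (x2, y2) in segments:
        # Fit slope and intercept; near-vertical segments would need special handling
        slope = (y2 - y1) / (x2 - x1 + 1e-9)
        intercept = y1 - slope * x1
        params.append((slope, intercept))
    pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            same_slope = abs(params[i][0] - params[j][0]) < slope_tol
            same_intercept = abs(params[i][1] - params[j][1]) < intercept_tol
            if same_slope and same_intercept:
                # Two collinear segments: likely the two halves of a broken ligament
                pairs.append((i, j))
    return pairs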
This may be an interesting way to do this too. I saved the second image only as 'image.jpg'.
I = imread('image.jpg');
J = imbinarize(rgb2gray(I)); % Threshold to get a BW image.
BW = bwpropfilt(~J, 'Area', [35001, 1013283]);
imshow(BW)
shows
For selecting the area thresholds easily, I used https://www.mathworks.com/help/images/calculate-region-properties-using-image-region-analyzer.html
If you have an older MATLAB version where imbinarize or bwpropfilt don't exist, you can use equivalent thresholding functions, and regionprops to extract all objects within the area range.

Read the corresponding image in MATLAB

I calculated the distance between an image A1 and different images (image1, image2, image3 and image4) based on their hierarchicalCentroid. dist_1_1{ii} contains 4 values. I want to find the minimum value present in dist_1_1{ii}, but it shows the value 1; I also want to show the image that gives the minimum value. Please help me. Thanks in advance.
%% demo
clc, clear all, close all
plotFlag = 1;
depth = 6;
alef1 = im2bw(imread('C1.bmp')); %% Binary image
vec1 = hierarchicalCentroid(alef1, depth, plotFlag);
% subplot(1,3,1);
A = [];
vec2 = [];
dist_1_1 = [];
for ii = 1:4
    A{ii} = imread(['image' num2str(ii) '.bmp']);
    % subplot(1,3,2);
    vec2{ii} = hierarchicalCentroid(A{ii}, depth, plotFlag);
    % subplot(1,3,3);
    % vec3 = hierarchicalCentroid(tav,depth,plotFlag);
    % vec4 = hierarchicalCentroid(A,depth,plotFlag);
    % vec5 = hierarchicalCentroid(A,depth,plotFlag);
    dist_1_1{ii} = sum((vec1 - vec2{ii}) .^ 2);
    [~, I] = min(dist_1_1{ii});
    figure;
    subplot(1,2,1); imshow(alef1);
    subplot(1,2,2); imshow(A{I});
end
Assuming that your images are named image1.png, image2.png, ...,
first read and store the images in a cell array:
for ii = 1:n
    A{ii} = imread(['image' num2str(ii) '.png']);
end
Then compute the similarity between the image A1 and other images:
ind = computeSimilarity(A1, A); % here you compute the similarity
(Of course you would need a for-loop.)
After you have stored the values in the ind vector:
ind = [0.76,1.96,2.96];
Then find the index of the minimum value and choose the image accordingly:
[~,I] = min(ind);
figure;
subplot(1,2,1);imshow(A1);
subplot(1,2,2);imshow(A{I});
What should be corrected in your code:
First of all, avoid using cell arrays when they are not necessary, and define them correctly when you do use them. You cannot define a cell array like A = []; you should do it like A = cell(2,3). For instance, for storing the descriptor vectors you do not need a cell array; just store them as a matrix, as I did.
Second, when posting your code here, remove unnecessary parts such as commented-out plots and commands.
Then try to modify your code as follows. I might have made some mistakes with the dimensions, but you will get the main idea.
Also remember that you do not need to check each distance inside the loop: calculate the vectors first and then find all the distances in one step, as I did.
depth = 6;
alef1 = im2bw(imread('C1.bmp'));
vec1 = hierarchicalCentroid(alef1, depth, 0);
A = cell(1,4);
vecMatrix = zeros(4, length(vec1));
for ii = 1:4
    A{1,ii} = imread(['image' num2str(ii) '.bmp']);
    vecMatrix(ii,:) = hierarchicalCentroid(A{1,ii}, depth, 0);
end
dist = sum((repmat(vec1, 4, 1) - vecMatrix) .^ 2, 2);
[~, I] = min(dist);

Check if second image is subimage of first image

I want to find out if a given image is an exact or similar part of another image in MATLAB.
For example, detecting a score bar in a cricket video frame. I would like to detect if there is a scorebar displayed in the given image or not.
1. Larger image
2. Another image
3. Check if this is a subimage
I want to check whether 3 is a part of 1 or not. It doesn't have to be an exact part; for example, even if a scorebar exists in 1 and the two scorebars are not the same, that would do.
What I am trying:
I am trying to divide the larger image into small parts, take the last part of the image, and calculate the hue-histogram difference with the scorebar image. If it falls below a certain threshold, I classify the scorebar as part of the bigger image. Is this the right approach, or should I follow some other, better approach? Please suggest one if you have it.
Code I wrote:
rgbImage = imread('img7517.jpg'); %bigger image
[r, c, x] = size(rgbImage);
numberOfBins = 256;
r1 = 6*r/7;
im = rgbImage(r1:r,:,1);
subplot(2,2,1);
imshow(im);
hsv = rgb2hsv(im);
h = hsv(:,:,1);
subplot(2,2,2);
hist(h(:), numberOfBins);
[counts, y] = hist(h(:), numberOfBins);
im1 = imread('scorebar.jpg'); %smaller image
subplot(2,2,3);
imshow(im1);
hsv = rgb2hsv(rgbImage);
h = hsv(:,:,1);
subplot(2,2,4);
hist(h(:), numberOfBins);
[count, y] = hist(h(:), numberOfBins);
c = sum(abs(counts(:) - count(:)));
disp(c);
Problem
But this doesn't give me any significant histogram difference between 1,3 and 2,3. The value of c for 1,3 is 72949 and for 2,3 it is 72875. How do I do this? Is the problem in the code or in the approach? Please help me solve this problem.
Edit:
Trying normalized cross-correlation,
im1 = rgb2gray(imread('replay.jpg'));
im2 = rgb2gray(imread('scorebar1.jpg'));
c = normxcorr2(im2, im1);
[ypeak, xpeak] = find(c==max(c(:)));
yoffSet = ypeak-size(im1,1);
xoffSet = xpeak-size(im1,2);
hFig = figure;
hAx = axes;
imshow(im2,'Parent', hAx);
imrect(hAx, [xoffSet, yoffSet, size(im1,2), size(im1,1)]);
following this link, but it doesn't give a similar analysis.
This class of problem (finding a target image within a larger image) is known as template matching. Typically you might use normalised cross-correlation, but there are various different algorithms, depending on your requirements and specific use case.
Unfortunately your home-brew histogram-based algorithm is probably not going to give very good results, as you have already observed, so you'll probably need to try one of the commonly-used methods described in the articles linked to above.
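For illustration, a minimal template-matching sketch in Python/OpenCV (the answer does not prescribe a specific implementation, and the 0.8 threshold is an assumption you would tune on your frames):

import cv2

def contains_template(scene_path, template_path, threshold=0.8):
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    # Normalised correlation of the template against every position in the scene
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    # A high peak means something very like the template (e.g. a scorebar) is present
    return max_val >= threshold, max_loc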
The solution I got:
im1 = rgb2gray(imread('img1.jpg'));
im2 = rgb2gray(imread('scorebar.jpg'));
[r, c, x] = size(im1); % size of the bigger image
numberOfBins = 256;
r1 = 6*r/7;
im1 = im1(r1:r,:,1);
[counts, y] = imhist(im1, numberOfBins);
[count, y] = imhist(im2, numberOfBins);
c = sum(abs(counts(:) - count(:)));
disp(c);
This gives a significant hue-histogram difference (HHD) between the histograms. The images that have scorebars have an HHD of 2000-5000; those that don't have scorebars have an HHD > 10000.
