I am trying to calculate the pose of image Y, given image X. Image Y is the same as image X rotated 90 degrees.
1. So, for starters, I find the matches between both images.
2. Then I store all the good matches.
3. The homography between the matches from both images is calculated using cv2.RANSAC.
4. Then, for image X, I transform the 2D matching points into 3D, adding 0 as the Z axis.
5. Object points contain all points from the matches of the original image, while image points contain the matches from the training image. Both arrays of points are filtered using the mask returned by the homography.
6. After that, I use cv2.calibrateCamera with these object points and image points.
7. Finally, I use cv2.projectPoints to get the projections of the axis.
I know that up to step 5 the results are correct, because I use cv2.drawMatches to check the matches. However, this may not be the way to get what I want to achieve.
matches = flann.knnMatch(query_image.descriptors, descriptors, k=2)
good = []
for m, n in matches:
    if m.distance < 0.70 * n.distance:
        good.append(m)
current_good = good
src_pts = np.float32([selected_image.keypoints[m.queryIdx].pt for m in current_good]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[m.trainIdx].pt for m in current_good]).reshape(-1, 1, 2)
homography, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
test = np.zeros(((mask.ravel() > 0).sum(), 3),np.float32) #obj points
test1 = np.zeros(((mask.ravel() > 0).sum(), 2), np.float32) #img points
i = 0
counter = 0
for m in current_good:
    if mask.ravel()[i] == 1:
        test[counter][0] = selected_image.keypoints[m.queryIdx].pt[0]
        test[counter][1] = selected_image.keypoints[m.queryIdx].pt[1]
        test1[counter][0] = selected_image.keypoints[m.trainIdx].pt[0]
        test1[counter][1] = selected_image.keypoints[m.trainIdx].pt[1]
        counter += 1
    i += 1
gray = cv2.cvtColor(self.train_image, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
# Here start my doubts about what I want to do and whether it is possible to do it this way
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera([test], [test1], gray.shape[::-1], None, None)
axis = np.float32([[3, 0, 0], [0, 3, 0], [0, 0, -3]]).reshape(-1, 3)
rvecs = np.array(rvecs, np.float32)
tvecs = np.array(tvecs, np.float32)
imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)
However, after all this, the imgpts returned by cv2.projectPoints give results that don't make much sense to me, like:
[[[857.3185 109.317406]]
[[857.2196 108.360954]]
[[857.2846 107.579605]]]
I would like to draw a normal to my image like the one shown here: https://docs.opencv.org/trunk/d7/d53/tutorial_py_pose.html. I successfully got it to work using the chessboard image, but trying to adapt it to a general image is giving me strange results.
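For reference, the linked tutorial's pose step roughly looks like the sketch below (a minimal sketch with illustrative names: it assumes a camera matrix mtx and distortion coefficients dist obtained beforehand from a multi-view cv2.calibrateCamera run, planar object points objp with Z = 0 and their corresponding image points imgp; none of these names come from my code above):
import cv2
import numpy as np

def draw_axis_sketch(objp, imgp, mtx, dist):
    # objp: Nx3 float32 planar object points (Z = 0)
    # imgp: Nx2 float32 corresponding image points
    # mtx, dist: intrinsics assumed known from a prior calibration
    ret, rvec, tvec, inliers = cv2.solvePnPRansac(objp, imgp, mtx, dist)
    # Project a 3-unit axis into the image, as in the tutorial
    axis = np.float32([[3, 0, 0], [0, 3, 0], [0, 0, -3]]).reshape(-1, 3)
    imgpts, _ = cv2.projectPoints(axis, rvec, tvec, mtx, dist)
    return imgpts
That sketch assumes the intrinsics are already known, whereas my code above tries to obtain them from a single view with cv2.calibrateCamera, so I am not sure the two setups are comparable.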
Related
How can I apply in Python a transformation with centralization, like imtransform in MATLAB (see its exact semantics, it is acutely relevant)?
For example, in MATLAB, for this tform:
tform = maketform('affine',[1 0 0; -1 1 0; 0 0 1]);
I get:
and in Python, with a big variety of affine transformation methods (Pillow, OpenCV, skimage, etc.), I get it non-centralized and cut:
How can I choose my 3x3 tform matrix for the Python libraries, such that it will centralize the image after such skewing?
MATLAB's default behavior is to expand and centralize the output image, but this behavior is unique to MATLAB.
There might be some Python equivalent that I am not aware of, but I would like to focus on OpenCV solution.
In OpenCV, you need to compute the coefficients of the transformation matrix, and compute the size of the output image in order to get the same result as in MATLAB.
Consider the following implementation details:
In OpenCV, the transformation matrix is transposed relative to MATLAB (different convention).
In Python the first index is [0, 0] and in MATLAB [1, 1].
You need to compute the dimensions (width and height) of the output image in advance.
You need the output dimensions to include the entire transformed image (all the corners of the transformed image should enter the output image).
My suggestion is to transform the four corners, and compute max_x - min_x and max_y - min_y of the transformed corners.
For centralizing the output, you need to compute the translation coefficients (last column in OpenCV matrix).
Assume: Source center is transformed (shifted) to destination center.
For computing the translation, you may use inverse transformation, and compute the translation (shift in pixels) from the source center to destination center.
Here is a Python code sample (using OpenCV):
import numpy as np
import cv2
# Read input image
src_im = cv2.imread('peppers.png')
# Build a transformation matrix (the transformation matrix is transposed relative to MATLAB)
t = np.float32([[1, -1, 0],
[0, 1, 0],
[0, 0, 1]])
# Use only first two rows (affine transformation assumes last row is [0, 0, 1])
#trans = np.float32([[1, -1, 0],
# [0, 1, 0]])
trans = t[0:2, :]
inv_t = np.linalg.inv(t)
inv_trans = inv_t[0:2, :]
# get the sizes
h, w = src_im.shape[:2]
# Transform the 4 corners of the input image
src_pts = np.float32([[0, 0], [w-1, 0], [0, h-1], [w-1, h-1]]) # https://stackoverflow.com/questions/44378098/trouble-getting-cv-transform-to-work (see comment).
dst_pts = cv2.transform(np.array([src_pts]), trans)[0]
min_x, max_x = np.min(dst_pts[:, 0]), np.max(dst_pts[:, 0])
min_y, max_y = np.min(dst_pts[:, 1]), np.max(dst_pts[:, 1])
# Destination matrix width and height
dst_w = int(max_x - min_x + 1) # 895
dst_h = int(max_y - min_y + 1) # 384
# Inverse transform the center of destination image, for getting the coordinate on the source image.
dst_center = np.float32([[(dst_w-1.0)/2, (dst_h-1.0)/2]])
src_projected_center = cv2.transform(np.array([dst_center]), inv_trans)[0]
# Compute the translation of the center - assume source center goes to destination center
translation = src_projected_center - np.float32([[(w-1.0)/2, (h-1.0)/2]])
# Place the translation in the third column of trans
trans[:, 2] = translation
# Transform
dst_im = cv2.warpAffine(src_im, trans, (dst_w, dst_h))
# Show dst_im as output
cv2.imshow('dst_im', dst_im)
cv2.waitKey()
cv2.destroyAllWindows()
# Store output for testing
cv2.imwrite('dst_im.png', dst_im)
MATLAB code for comparing results:
I = imread('peppers.png');
tform = maketform('affine',[1 0 0; -1 1 0; 0 0 1]);
J = imtransform(I, tform);
figure;imshow(J)
% MATLAB recommends using affine2d and imwarp instead of maketform and imtransform.
% tform = affine2d([1 0 0; -1 1 0; 0 0 1]);
% J = imwarp(I, tform);
% figure;imshow(J)
pyJ = imread('dst_im.png');
figure;imagesc(double(rgb2gray(J)) - double(rgb2gray(pyJ)));
title('MATLAB - Python Diff');impixelinfo
max_abs_diff = max(imabsdiff(J(:), pyJ(:)));
disp(['max_abs_diff = ', num2str(max_abs_diff)])
We are lucky to get a zero difference; the result of imwarp in MATLAB shows minor differences (but the imtransform result is the same as OpenCV's).
Python output image (same as MATLAB output image):
I have 2 images ("before" and "after"). I would like to show a final image where the left half is taken from the before image and the right half is taken from the after image.
The images should be separated by a white diagonal line of predefined width (2 or 3 pixels), where the diagonal is specified either by a certain angle or by 2 start and end coordinates. The diagonal should overwrite a part of the final image such that the size is the same as the sources'.
Example:
I know it can be done by looping over all pixels to recombine and create the final image, but is there an efficient way, or better yet, a built-in function that can do this?
Unfortunately, I don't believe there is a built-in solution to your problem, but I've developed some code to help you do this. It does, however, require the Image Processing Toolbox to play nicely with the code. As mentioned in your comments, you have this already, so we should be fine.
The logic behind this is relatively simple. We will assume that your before and after pictures are the same size and share the same number of channels. The first part is to declare a blank image and draw a straight line of a certain thickness down the middle. The intricacy is that we declare an image slightly bigger than the original size, because I'm going to draw a line down the middle and then rotate this blank image by a certain angle to achieve the first part of what you desire. I'll be using imrotate to rotate the image by any angle you desire. The first instinct is to declare an image that's the same size as the originals, draw a line down the middle and rotate it. However, if you do this you'll end up with the line being disconnected, not drawn from the top to the bottom of the image. That makes sense, because a line drawn at an angle covers more pixels than one drawn vertically.
Using the Pythagorean theorem, we know that the longest line that can ever be drawn on your image is the diagonal. Therefore we declare an image that is sqrt(rows*rows + cols*cols) in both the rows and columns, where rows and cols are the rows and columns of the original image. Afterwards, we take the ceiling to make sure we've covered as much as possible, and we add a bit of extra room to accommodate the width of the line. We draw a line on this image, rotate it, then crop it so that it's the same size as the input images. This ensures that the line drawn at whatever angle you wish is fully drawn from top to bottom.
That logic is the hardest part. Once you do that, you declare two logical masks: use imfill to fill in the left side as one mask, and invert it to obtain the other mask. You will also need to use the line image that we created earlier with imrotate to index into both masks and set those values to false, so that the pixels on the line are ignored.
Finally, you take each mask, index into the corresponding image and copy over the portion of the image you desire. Lastly, you use the line image to index into the output and set those values to white.
Without further ado, here's the code:
% Load some example data
load mandrill;
% im is the image before
% im2 is the image after
% Before image is a colour image
im = im2uint8(ind2rgb(X, map));
% After image is a grayscale image
im2 = rgb2gray(im);
im2 = cat(3, im2, im2, im2);
% Declare line image
rows = size(im, 1); cols = size(im, 2);
width = 5;
m = ceil(sqrt(rows*rows + cols*cols + width*width));
ln = false([m m]);
mhalf = floor(m / 2); % Find halfway point width wise and draw the line
ln(:,mhalf - floor(width/2) : mhalf + floor(width/2)) = true;
% Rotate the line image
ang = 20; % 20 degrees
lnrotate = imrotate(ln, ang, 'crop');
% Crop the image so that it's the same dimensions as the originals
mrowstart = mhalf - floor(rows/2);
mcolstart = mhalf - floor(cols/2);
lnfinal = lnrotate(mrowstart : mrowstart + rows - 1, mcolstart : mcolstart + cols - 1);
% Make the masks
mask1 = imfill(lnfinal, [1 1]);
mask2 = ~mask1;
mask1(lnfinal) = false;
mask2(lnfinal) = false;
% Make sure the masks have as many channels as the original
mask1 = repmat(mask1, [1 1 size(im,3)]);
mask2 = repmat(mask2, [1 1 size(im,3)]);
% Do the same for the line
lnfinal = repmat(lnfinal, [1 1 size(im, 3)]);
% Specify output image
out = zeros(size(im), class(im));
out(mask1) = im(mask1);
out(mask2) = im2(mask2);
out(lnfinal) = 255;
% Show the image
figure;
imshow(out);
We get:
If you want the line to go in the other direction, simply make the angle ang negative. In the example script above, I've made the angle 20 degrees counter-clockwise (i.e. positive). To reproduce the example you gave, specify -20 degrees instead. I now get this image:
Here's a solution using polygons:
function q44310306
% Load some image:
I = imread('peppers.png');
B = rgb2gray(I);
lt = I; rt = B;
% Specify the boundaries of the white line:
width = 2; % [px]
offset = 13; % [px]
sz = size(I);
wlb = [floor(sz(2)/2)-offset+[0,width]; ceil(sz(2)/2)+offset-[width,0]];
% [top-left, top-right; bottom-left, bottom-right]
% Configure two polygons:
leftPoly = struct('x',[1 wlb(1,2) wlb(2,2) 1], 'y',[1 1 sz(1) sz(1)]);
rightPoly = struct('x',[sz(2) wlb(1,1) wlb(2,1) sz(2)],'y',[1 1 sz(1) sz(1)]);
% Define a helper grid:
[XX,YY] = meshgrid(1:sz(2),1:sz(1));
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y)) = intmin('uint8');
lt(repmat(inpolygon(XX,YY,rightPoly.x,rightPoly.y),1,1,3)) = intmin('uint8');
rt(inpolygon(XX,YY,leftPoly.x,leftPoly.y) & ...
inpolygon(XX,YY,rightPoly.x,rightPoly.y)) = intmax('uint8');
final = bsxfun(@plus,lt,rt);
% Plot:
figure(); imshow(final);
The result:
One solution:
im1 = imread('peppers.png');
im2 = repmat(rgb2gray(im1),1,1,3);
imgsplitter(im1,im2,80) %imgsplitter(image1,image2,angle [0-100])
function imgsplitter(im1,im2,p)
s1 = size(im1,1); s2 = size(im1,2);
pix = floor(p*size(im1,2)/100);
val = abs(pix -(s2-pix));
dia = imresize(tril(ones(s1)),[s1 val]);
len = min(abs([0-pix,s2-pix]));
if p>50
ind = [ones(s1,len) fliplr(~dia) zeros(s1,len)];
else
ind = [ones(s1,len) dia zeros(s1,len)];
end
ind = uint8(ind);
imshow(ind.*im1+uint8(~ind).*im2)
hold on
plot([pix,s2-pix],[0,s1],'w','LineWidth',1)
end
OUTPUT:
Suppose I would like to draw an image like the following:
where the pixel values are defined as 0 for black and 1 for white.
These lines are drawn with a specific radius and angle.
Now I create an 80 x 160 matrix:
texturematrix = zeros(80,160);
Then I want to change particular elements to 1 according to the line conditions,
but how do I make them repeat efficiently with a specific distance apart from each other?
Thanks a lot everyone!
This might not be what you are looking for, but generating such an image could be done by plotting a set of lines, as follows:
% grid sizes
m = 6;
n = 5;
% line length and angle
len = 1;
theta = .1*pi;
[a,b] = meshgrid(1:m,1:n);
x = reshape([a(:),a(:)+len*cos(theta),nan(numel(a),1)]',[],1);
y = reshape([b(:),b(:)+len*sin(theta),nan(numel(b),1)]',[],1);
h = figure();
plot(x,y,'k', 'LineWidth', 2);
But this has nothing to do with a texture matrix. So, we construct a matrix of desired size:
set(gca, 'position',[0 0 1 1], 'units','normalized', 'YTick',[], 'XTick',[]);
frame = frame2im(getframe(h));
im = imresize(frame,[80 160]);
M = ~(im(2:end,2:end,1)==255);
Usually, we can get a point's position in an image this way:
figure, imshow rice.png;
[x,y,~] = ginput(1)
What is returned is something like this:
x = 121
y = 100
These numbers are measured in pixels, but I'd like more accurate results, like:
x = 121.35
y = 100.87
Any help would be appreciated!
I think imagesc can be useful
% load example image
Istruct = load('gatlin');
I = Istruct.X./max(Istruct.X(:));
% display it
figure;
imagesc(I);
colormap(gray);
% get some point
[x,y]=ginput(1);
[x, y] % x and y will be double
For aligning / registering two images using control points you do need sub-pixel accuracy for the different control points.
MATLAB has a very nice user interface for this purpose that you might want to look at: cpselect.
A nice tutorial is also available here.
Given two images oim1 and oim2 you may use cpselect to transform oim2 to "fit" oim1:
>> [input_points, base_points] = cpselect(oim2, oim1, 'Wait', true);
>> T = cp2tform( input_points, base_points, 'similarity' ); % find similarity transformation
>> aim2 = tformarray( oim2, T, makeresampler('cubic','fill'), [2 1], [2 1], size(oim1(:,:,1)'), [], 0 );
I am plotting a 7x7 pixel 'image' in MATLAB, using the imagesc command:
imagesc(conf_matrix, [0 1]);
This represents a confusion matrix, between seven different objects. I have a thumbnail picture of each of the seven objects that I would like to use as the axes tick labels. Is there an easy way to do this?
I don't know an easy way. The axes property XTickLabel, which determines the labels, can only contain strings.
If you want a not-so-easy way, you could do something in the spirit of the following incomplete (in the sense of not being a complete solution) code, creating one label:
h = imagesc(rand(7,7));
axh = gca;
figh = gcf;
xticks = get(gca,'xtick');
yticks = get(gca,'ytick');
set(gca,'XTickLabel','');
set(gca,'YTickLabel','');
pos = get(axh,'position'); % position of current axes in parent figure
pic = imread('coins.png');
x = pos(1);
y = pos(2);
dlta = (pos(3)-pos(1)) / length(xticks); % square size in units of parent figure
% create image label
lblAx = axes('parent',figh,'position',[x+dlta/4,y-dlta/2,dlta/2,dlta/2]);
imagesc(pic,'parent',lblAx)
axis(lblAx,'off')
One problem is that the label will have the same colormap of the original image.
@Itmar Katz gives a solution very close to what I want to do, which I've marked as 'accepted'. In the meantime, I made this dirty solution using subplots, which I've given here for completeness. It only works up to a certain size of input matrix, though, and only displays well when the figure is square.
conf_mat = randn(5);
A = imread('peppers.png');
tick_images = {A, A, A, A, A};
n = length(conf_mat) + 1;
% plotting axis labels at left and top
for i = 1:(n-1)
subplot(n, n, i + 1);
imshow(tick_images{i});
subplot(n, n, i * n + 1);
imshow(tick_images{i});
end
% generating logical array for where the confusion matrix should be
idx = 1:(n*n);
idx(1:n) = 0;
idx(mod(idx, n)==1) = 0;
% plotting the confusion matrix
subplot(n, n, find(idx~=0));
imshow(conf_mat);
axis image
colormap(gray)