Applying a homography transformation in Python without using OpenCV

Given an input image and the homography matrix, I want to get an output image after the transformation.
This is what I get using the built-in function in scipy.ndimage:
import numpy as np
from PIL import Image
from scipy import ndimage
from matplotlib.pyplot import imshow
im = np.array(Image.open('lena.jpg').convert('L'))
H = np.array([[1.4, 0.05, -100], [0.05, 1.5, -100], [0, 0, 1]])
im2 = ndimage.affine_transform(im, H[:2, :2], (H[0, 2], H[1, 2]))
imshow(im)
imshow(im2)
For the original image I see this:
For im2 after ndimage transformation I see this:
Now I want to write code using only Python and the NumPy library to do this homography myself. This is the code I wrote:
left, up = 0, 0
right, down = im.shape[1], im.shape[0]
# define the homography operation
def get_point_coor(x, y, H):
    input = np.array(([x], [y], [1]))
    output = np.dot(H, input)
    return int(output[0]), int(output[1])
# after transformation the image size might be different from the original one,
# we need to find the new size
height_max = max(get_point_coor(left, up, H)[0], get_point_coor(left, down, H)[0], get_point_coor(right, up, H)[0], get_point_coor(right, down, H)[0])
width_max = max(get_point_coor(left, up, H)[1], get_point_coor(left, down, H)[1], get_point_coor(right, up, H)[1], get_point_coor(right, down, H)[1])
height_min = min(get_point_coor(left, up, H)[0], get_point_coor(left, down, H)[0], get_point_coor(right, up, H)[0], get_point_coor(right, down, H)[0])
width_min = min(get_point_coor(left, up, H)[1], get_point_coor(left, down, H)[1], get_point_coor(right, up, H)[1], get_point_coor(right, down, H)[1])
# can ignore this 50 for now. new_height without the 50 should be enough to be the new
# boundary, but somehow it is not, so I add a random big number (50) for plotting.
new_height = abs(height_max) + abs(height_min)+50
new_width = abs(width_max) + abs(width_min)+50
new_image = np.zeros((new_height, new_width))
# start the main
for row in range(im.shape[0]):
    for col in range(im.shape[1]):
        new_row, new_col = get_point_coor(row, col, H)
        new_col += abs(width_min)
        new_row += abs(height_min)
        new_image[new_row, new_col] = im[row][col]
imshow(new_image)
The result I get is this:
The direction, color, and size all look very different from the ndimage one. What is the correct way to implement this homography?

Sorry to say, but you are making a beginner's mistake: if you scan the source image and copy the pixels to the destination at the transformed coordinates, you will get poor results: either the points will be too dense and collide with each other, or too sparse and leave holes.
The right thing to do is to scan the destination and fetch each source coordinate using the inverse transformation.
As the source coordinates will not be integers in general, you can round them or, for better quality, use bilinear or bicubic interpolation between the source pixels.
A second difficulty appears: as the destination domain is a general quadrilateral, you should only paint the pixels inside it, and that takes a raster scan conversion of the outline. Alternatively, you can fill the bounding box of this quadrilateral and assign the background color whenever the source pixel falls out of bounds, as in the sketch below.
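Here is a minimal NumPy sketch of this backward mapping (the function name warp_homography and the bilinear-sampling details are my own, not from the question; it assumes H maps source (x, y, 1) coordinates to destination coordinates, as the question's get_point_coor does). Incidentally, ndimage.affine_transform works the same way: the matrix you pass it maps output coordinates back to input coordinates, which is why its result has no holes.
import numpy as np

def warp_homography(im, H, out_shape, fill=0):
    # scan the destination grid and map every pixel back through H^-1
    Hinv = np.linalg.inv(H)
    out_h, out_w = out_shape
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))  # x = column, y = row
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    src = Hinv @ pts
    sx, sy = src[0] / src[2], src[1] / src[2]  # homogeneous divide
    # bilinear interpolation between the four neighbouring source pixels
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = sx - x0, sy - y0
    inside = (x0 >= 0) & (y0 >= 0) & (x1 < im.shape[1]) & (y1 < im.shape[0])
    x0, x1 = np.clip(x0, 0, im.shape[1] - 1), np.clip(x1, 0, im.shape[1] - 1)
    y0, y1 = np.clip(y0, 0, im.shape[0] - 1), np.clip(y1, 0, im.shape[0] - 1)
    f = im.astype(float)
    vals = (f[y0, x0] * (1 - wx) * (1 - wy) + f[y0, x1] * wx * (1 - wy)
            + f[y1, x0] * (1 - wx) * wy + f[y1, x1] * wx * wy)
    out = np.full(out_h * out_w, float(fill))
    out[inside] = vals[inside]  # background color where the source is out of bounds
    return out.reshape(out_h, out_w)

im2 = warp_homography(im, H, im.shape)
To get the whole transformed extent instead of cropping to the original size, transform the four corners with H (as the question already does) to size the output canvas, then fold the resulting offset into H as an extra translation before inverting.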

Related

Interpolating 2D position data

I have some data that was acquired from an individual who moved a cursor from one target presented on a screen to another, straight-ahead target 10 centimeters away from the original start position. I have 15 movements from this person. The data I have are the instantaneous x-position x(t) and y-position y(t) of the cursor for each movement. Because the individual did not move the mouse at exactly the same speed from one movement to another, the number of samples for each movement is not the same. Below, I am showing the x-positions, y-positions, and complete x-y trajectories for all movements. A link to download the data is here as well. Hopefully this gives a sense of the nature of the data. Note that the cursor position always starts at the [0,0] point, but doesn't always end exactly at the [0,10] point.
I am tasked with linearly interpolating the x-positions of the cursor onto a vector of y-positions every 0.05 cm in order to align the cursor measurements across movements (that is, I must obtain x(y)). I would like to present my code below and get some feedback about whether or not I am doing this correctly:
%% Try interpolating x as a function of y (i.e., transform x(t) to x(y))
%Goal is to linearly interpolate the x-positions of the cursor onto a
%vector of y-positions every 0.05cm to align the hand path measurements
%across movements
clear all;
close all;
home;
%load the data: xpos & ypos
load('pos_data.mat')
%create cell array that will hold interpolated data
x_interp = cell(size(xpos));
y_interp = cell(size(ypos));
%construct grid vector
ypos_grid = [0:0.05:10];
num_interp_samples = length(ypos_grid);
%loop through each movement
for k=1:num_movements
    %get data for the current movement
    xp = xpos{k};
    yp = ypos{k};
    %to deal with duplicate samples, I add a very small offset to each
    %sample to make them all unique
    offset = cumsum([0:length(yp)-1]*1e-16);
    yp2 = yp+offset(:);
    %interpolate xp wrt yp
    x_interp{k} = interp1(yp2, xp, ypos_grid);
    %interpolate yp so that it is the same size as x_interp
    t = linspace(1,length(yp2),length(yp2));
    ti = linspace(1,length(yp2),num_interp_samples);
    y_interp{k} = interp1(t, yp2, ti);
end
I think this should be relatively simple, but when I plot the interpolated data, it looks a bit strange to me. See below:
Namely, the trajectories seem to have lost much of their "curvature", which has me worried. Note that when plotting the interpolated trajectories, I am simply doing:
figure; hold on;
for k=1:num_movements
    plot(x_interp{k}, y_interp{k}, 'color', 'k');
end
xlabel('x-position (cm)');
ylabel('y-position (cm)');
title('Examples of complete trajectories (interpolated data)');
axis equal;
Here are my specific questions:
(1) Am I interpolating correctly?
(2) If the answer to (1) is yes, then am I interpreting the interpolated result correctly? Specifically, why do the shapes of the trajectories appear as they do (lacking curvature)?
(3) Am I perhaps missing a step where after obtaining x(y), I should re-transform the data back into x(t)?

Calculate 3D distance based on change in intensity

I have three sections (top, mid, bot) of grayscale images (3D). In each section, I have a point with coordinates (x,y) and intensity values [0-255]. The distance between each section is 20 pixels.
I created an illustration to show how those images were generated using a microscope:
Illustration
Illustration (side view): the red line is the object of interest. Blue stars represent the dots which are visible in the top, mid, and bot sections. The (x,y) coordinates of these dots are known. The length of the object remains the same, but it can rotate in space and go 'out of focus' (the illustration shows a rotating line at time point 5). At time point 1, the red line is resting (in the 2D image: 2 dots with a distance equal to the length of the object).
I want to estimate the x-, y-, and z-coordinates of the end points (represented as stars) by using the changes in intensity, the knowledge about the length of the object, and the information in the sections I have. Any help would be appreciated.
Here is an example of images:
Bot section
Mid section
Top section
My 3D PSF data:
https://drive.google.com/file/d/1qoyhWtLDD2fUy2zThYUgkYM3vMXxNh64/view?usp=sharing
Attempt so far:
I guess the correct approach would be to record three images with slightly different z-coordinates for your bot and your top frame, then do a 3D deconvolution (using Richardson-Lucy or some other algorithm).
However, a simpler approach would be the one I outlined in my comment. If you use the data for a publication, I strongly recommend emphasizing that this is just an estimate and including the steps by which you obtained it.
I'd suggest the following procedure:
Since I do not have your PSF data, I fake some by modelling the PSF as a 3D Gaussian. Of course, this is a strong simplification, but you should be able to get the idea behind it.
First, fit a Gaussian to the PSF along z:
[xg, yg, zg] = meshgrid(-32:32, -32:32, -32:32);
rg = sqrt(xg.^2+yg.^2);
psf = exp(-(rg/8).^2) .* exp(-(zg/16).^2);
% add some noise to make it a bit more realistic
psf = psf + randn(size(psf)) * 0.05;
% view psf:
subplot(1,3,1);
s = slice(xg,yg,zg, psf, 0,0,[]);
title('faked PSF');
for i=1:2
    s(i).EdgeColor = 'none';
end
% data along z through PSF's center
z = reshape(psf(33,33,:),[65,1]);
subplot(1,3,2);
plot(-32:32, z);
title('PSF along z');
% Fit the data
% Generate a function for a Gaussian distribution plus some background
gauss_d = @(x0, sigma, bg, x) exp(-1*((x-x0)/sigma).^2) + bg;
ft = fit((-32:32)', z, gauss_d, ...
    'StartPoint', [0 16 0] ... % You may find proper start points by looking at your data
    );
subplot(1,3,3);
plot(-32:32, z, '.');
hold on;
plot(-32:.1:32, feval(ft, -32:.1:32), 'r-');
title('fit to z-profile');
The function that relates the intensity I to the z-coordinate is
gauss_d = @(x0, sigma, bg, x) exp(-1*((x-x0)/sigma).^2) + bg;
You can re-arrange this formula for x. Due to the square root, there are two possibilities:
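Spelled out: from I = exp(-((z-x0)/sigma)^2) + bg it follows that log(I-bg) = -((z-x0)/sigma)^2, hence z = x0 ± sigma*sqrt(-log(I-bg)); the plus and minus signs give the two functions below: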
% now make a function that returns the z-coordinate from the intensity
% value:
zfromI  = @(I) ft.sigma * sqrt(-1*log(I-ft.bg)) + ft.x0;
zfromI2 = @(I) ft.sigma * -sqrt(-1*log(I-ft.bg)) + ft.x0;
Note that the PSF I have faked is normalized to have one as its maximum value. If your PSF data is not normalized, you can divide the data by its maximum.
Now, you can use zfromI or zfromI2 to get the z-coordinate for your intensity. Again, I should be normalized, i.e. it should be the ratio of the measured intensity to the intensity of your reference spot:
zfromI(.7)
ans =
9.5469
>> zfromI2(.7)
ans =
-9.4644
Note that due to the random noise I have added, your results might look slightly different.

Create mask from bwtraceboundary in Matlab

I'm trying to create a mask (or similar result) in order to erase pieces of a binary image that are not attached to the object surrounded by the boundary. I saw this thread (http://www.mathworks.com/matlabcentral/answers/120579-converting-boundary-to-mask) to do this from bwboundaries, but I'm having trouble making suitable changes to it. My goal is to use this code to isolate the part of this topography map that is connected, and get rid of the extra pieces. I need to retain the structure inside of the bounded area, as I was then going to use bwboundaries to create additional boundary lines of the main object's "interior" structure.
The following is my code to first create the single boundary line by searching for the bottom left pixel of the black area to begin the trace. It just looks for the first column of the image that isn't completely white and selects the last black pixel. The second section was then to create the inner boundary lines. Note that I am attempting this two step process, but if there is a way to do it with only one I'd like to hear that solution as well. Ultimately I just want boundaries for the main, large black area and the holes inside of it, while getting rid of the extra pieces hanging around.
figName='Images/BookTrace_1';
BW = imread([figName,'.png']);
BW=im2bw(BW);
imshow(BW,[]);
for j=1:size(BW,2)
    if sum(BW(:,j))~=sum(BW(:,1))
        corner=BW(:,j);
        c=j-1;
        break
    end
end
r=find(corner==0);
r=r(end);
outline = bwtraceboundary(BW,[r c],'W',8,Inf,'counterclockwise');
hold on;
plot(outline(:,2),outline(:,1),'g','LineWidth',2);
[B,L] = bwboundaries(BW);
hold on
for k = 1:length(B)
    boundary = B{k};
    plot(boundary(:,2), boundary(:,1), 'g', 'LineWidth', 2)
end
Any suggestions or tips are greatly appreciated. If there are questions, please let me know and I'll update the post. Thank you!
EDIT: For clarification, my end goal is as in the below image. I need to trace all of the outer and inner boundaries attached to the main object, while eliminating any spare small pieces that are not attached to it.
It's very simple. I actually wouldn't use the code above; I'd use the Image Processing Toolbox instead. There's a built-in function to remove any white pixels that touch the border of the image: the imclearborder function.
The function will return a new binary image where any pixels that were touching the borders of the image are removed. Given your code, it's simply:
out = imclearborder(BW);
Using the above image as an example, I'm going to threshold it so that the green lines are removed... or rather merged with the other white pixels, and I'll call the above function:
BW = imread('http://i.stack.imgur.com/jhLOw.png'); %// Read from StackOverflow
BW = im2bw(BW); %// Convert to binary
out = imclearborder(BW); %// Remove pixels along border
imshow(out); %// Show image
We get:
If you want the opposite effect, where you want to retain the boundaries and remove everything else inside, simply create a new image by copying the original one and use the output from the above to null these pixel locations.
out2 = BW; %// Make copy
out2(out) = 0; %// Set pixels not belonging to boundary to 0
imshow(out2); %// Show image
We thus get:
Edit
Given the above desired output, I believe I know what you want now. You wish to fill in the holes for each group of pixels and trace along the boundary of the desired result. The fact that we have this split up into two categories is going to be useful. For those objects that are in the interior, use the imfill function and specify the 'holes' option to fill in any of the black holes so that they're white. For the objects on the exterior, this will need a bit of work. What I would do is invert the image so that pixels that are black become white and vice versa, then use the bwareaopen function to clear away any pixels whose area is below a certain amount. This will remove those small isolated black regions that are along the border of the exterior regions. Once you're done, re-invert the image. The effect of this is that the small holes will be eliminated. I chose a threshold of 500 pixels for the area... seems to work well.
Therefore, using the above variables as reference, do this:
%// Fill holes for both regions separately
out_fill = imfill(out, 'holes');
out2_fill = ~bwareaopen(~out2, 500);
%// Merge together
final_out = out_fill | out2_fill;
This is what we get:
If you want a nice green border like in your example to illustrate this point, you can do this:
perim = bwperim(final_out);
red = final_out;
green = final_out;
blue = final_out;
red(perim) = 0;
blue(perim) = 0;
out_colour = 255*uint8(cat(3, red, green, blue));
imshow(out_colour);
The above code finds the perimeter of the objects, then we create a new image where the red and blue channels along the perimeter are set to 0, while setting the green channel to 255.
We get this:
You can ignore the green pixel border that surrounds the image. That's just a side effect of the way I'm finding the perimeter of the objects in the image. In fact, the image you supplied to me had a white pixel border surrounding the whole region, so I'm not sure if that's intended or if that's part of the whole grand scheme of things.
To consolidate into a working example so that you can copy and paste into MATLAB, here's all of the code in one code block:
%// Pre-processing
BW = imread('http://i.stack.imgur.com/jhLOw.png'); %// Read from StackOverflow
BW = im2bw(BW); %// Convert to binary
out = imclearborder(BW); %// Remove pixels along border
%// Obtain pixels that are along border
out2 = BW; %// Make copy
out2(out) = 0; %// Set pixels not belonging to boundary to 0
%// Fill holes for both regions separately
out_fill = imfill(out, 'holes');
out2_fill = ~bwareaopen(~out2, 500);
%// Merge together
final_out = out_fill | out2_fill;
%// Show final output
figure;
imshow(final_out);
%// Bonus - Show perimeter of output in green
perim = bwperim(final_out);
red = final_out;
green = final_out;
blue = final_out;
red(perim) = 0;
blue(perim) = 0;
out_colour = 255*uint8(cat(3, red, green, blue));
figure;
imshow(out_colour);

Counting the squama of lizards

A biologist friend of mine asked me if I could help him make a program to count the squama (is this the right translation?) of lizards.
He sent me some images and I tried some things in MATLAB. For some images it's much harder than for others, for example when there are darker (black) regions. At least with my method. I'm sure I can get some useful help here. How should I improve this? Have I taken the right approach?
These are some of the images.
I got the best results by following Image Processing and Counting using MATLAB. It basically turns the image into black and white and then thresholds it. But I did add a bit of erosion.
Here's the code:
img0=imread('C:...\pic.png');
img1=rgb2gray(img0);
%The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). Specify level in the range [0,1].
img2=im2bw(img1,0.65);%(img1,graythresh(img1));
imshow(img2)
figure;
%erode
se = strel('line',6,0);
img2 = imerode(img2,se);
se = strel('line',6,90);
img2 = imerode(img2,se);
imshow(img2)
figure;
imshow(img1, 'InitialMag', 'fit')
% Make a truecolor all-green image. I use this later to overlay it on top of the original image to show which elements were counted (with green)
green = cat(3, zeros(size(img1)),ones(size(img1)), zeros(size(img1)));
hold on
h = imshow(green);
hold off
%counts the elements now defined by black spots on the image
[B,L,N,A] = bwboundaries(img2);
%imshow(img2); hold on;
set(h, 'AlphaData', img2)
text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
figure;
%this produces a new image showing each counted element and its count id on top of it.
imshow(img2); hold on;
colors=['b' 'g' 'r' 'c' 'm' 'y'];
for k=1:length(B)
    boundary = B{k};
    cidx = mod(k,length(colors))+1;
    plot(boundary(:,2), boundary(:,1), colors(cidx),'LineWidth',2);
    %randomize text position for better visibility
    rndRow = ceil(length(boundary)/(mod(rand*k,7)+1));
    col = boundary(rndRow,2); row = boundary(rndRow,1);
    h = text(col+1, row-1, num2str(L(row,col)));
    set(h,'Color',colors(cidx),'FontSize',14,'FontWeight','bold');
end
figure;
spy(A);
And these are some of the results. In the top-left corner you can see how many were counted.
Also, I think it's useful to have the counted elements marked in green so at least the user can know which ones have to be counted manually.
There is one route you should consider: watershed segmentation. Here is a quick and dirty example with your first image (it assumes you have the IP toolbox):
raw=rgb2gray(imread('lCeL8.jpg'));
Icomp = imcomplement(raw);
I3 = imhmin(Icomp,20);
L = watershed(I3);
%%
imagesc(L);
axis image
Result shown with a colormap:
You can then count the cells as follows:
count = numel(unique(L));
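(A caveat from my side: unique(L) also includes the 0 label that watershed assigns to the ridge lines, and typically one large background basin, so you may want to subtract those two from the count; the regionprops loop below starts at k=2 for the same reason.)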
One of the advantages is that it can be directly fed to regionprops and give you all the nice details about the individual 'squama':
r=regionprops(L, 'All');
imshow(raw);
for k=2:numel(r)
    if r(k).Area>100 % I chose 100 to filter out the objects with a small area.
        rectangle('Position',r(k).BoundingBox, 'LineWidth',1, 'EdgeColor','b', 'Curvature', [1 1]);
    end
end
Which you could use to monitor over/under segmentation:
Note: special thanks to @jucestain for helping with the proper access to the fields in the r structure here

How can I "plot" an image on top of another image with a different colormap?

I've got two images, one 100x100 that I want to plot in grayscale and one 20x20 that I want to plot using another colormap. The latter should be superimposed on the former.
This is my current attempt:
A = randn(100);
B = ones(20);
imagesc(A);
colormap(gray);
hold on;
imagesc(B);
colormap(jet);
There are a couple of problems with this:
I can't change the offset of the smaller image. (They always share the upper-left pixel.)
They have the same colormap. (The second colormap changes the color of all pixels.)
The pixel values are normalised over the composite image, so that the first image changes if the second image introduces new extreme values. The scalings for the two images should be separate.
How can I fix this?
I want an effect similar to this, except that my coloured overlay is rectangular and not wibbly:
Just change it so that you pass in a full and proper color matrix for A (i.e. a 100x100x3 matrix), rather than letting it decide:
A = rand(100); % Using rand not randn because image doesn't like numbers > 1
A = repmat(A, [1, 1, 3]);
B = rand(20); % Changed to rand to illustrate effect of colormap
imagesc(A);
hold on;
Bimg = imagesc(B);
colormap jet;
To set the position of B's image within its parent axes, you can use its XData and YData properties, which are both set to [1 20] when this code has completed. The first number specifies the coordinate of the leftmost/uppermost point in the image, and the second number the coordinate of the rightmost/lowest point in the image. It will stretch the image if it doesn't match the original size.
Example:
xpos = get(Bimg, 'XData');
xpos = xpos + 20; % shift right a bit
set(Bimg, 'XData', xpos);
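As a side note, converting A to a truecolor (100x100x3) array is also what fixes the third problem: MATLAB ignores the colormap and color limits for truecolor images, so the jet colormap and its value scaling now apply only to B.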
