Make images overlap, despite being translated

I will have two images. They will be either the same or almost the same.
But sometimes one of the images may have been shifted by a few pixels along either axis.
What would be the best way to detect such a shift?
Or better still, what would be the best way to manipulate the images to correct for this unwanted movement?

If the images are really nearly identical, and are simply translated (i.e. not skewed, rotated, scaled, etc), you could try using cross-correlation.
When you cross-correlate an image with itself (the auto-correlation), the maximum value will be at the center of the resulting matrix. If you shift the image vertically or horizontally and then cross-correlate it with the original, the position of the maximum value shifts accordingly. By measuring how far the maximum has moved from its expected position, you can determine how far the image has been translated vertically and horizontally.
Here's a toy example in python. Start by importing some stuff, generating a test image, and examining the auto-correlation:
import numpy as np
from scipy.signal import correlate2d
# generate a test image
num_rows, num_cols = 40, 60
image = np.random.random((num_rows, num_cols))
# get the auto-correlation
correlated = correlate2d(image, image, mode='full')
# get the coordinates of the maximum value
max_coords = np.unravel_index(correlated.argmax(), correlated.shape)
This produces coordinates max_coords = (39, 59). Now to test the approach, shift the image to the right one column, add some random values on the left, and find the max value in the cross-correlation again:
image_translated = np.concatenate(
    (np.random.random((image.shape[0], 1)), image[:, :-1]),
    axis=1)
correlated = correlate2d(image_translated, image, mode='full')
new_max_coords = np.unravel_index(correlated.argmax(), correlated.shape)
This gives new_max_coords = (39, 60), correctly indicating the image is offset horizontally by 1 (because np.array(new_max_coords) - np.array(max_coords) is [0, 1]). Using this information you can shift images to compensate for translation.
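For illustration, a minimal sketch of that compensation step, assuming the only difference really is a pure translation (note that np.roll wraps pixels around, so the re-introduced border row/column is not meaningful):
# offset of the translated image relative to the original
row_shift, col_shift = np.array(new_max_coords) - np.array(max_coords)
# undo the translation; the wrapped-around border pixels are garbage
realigned = np.roll(image_translated, (-row_shift, -col_shift), axis=(0, 1))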
Note that, should you decide to go this way, you may have a lot of kinks to work out. Off-by-one errors abound when determining, given the dimensions of an image, where the max coordinate 'should' be following correlation (i.e. to avoid computing the auto-correlation and determining these coordinates empirically), especially if the images have an even number of rows/columns. In the example above, the center is just [num_rows-1, num_cols-1] but I'm not sure if that's a safe assumption more generally.
But for many cases -- especially those with images that are almost exactly the same and only translated -- this approach should work quite well.

Related

Draw a curve in the segmented image using matlab

I have a segmented image as shown here.
I want to fit a curve along the top pixels of the segmented image (shown as the red curve), and I want to find the top point along that curve (shown in blue). I have already worked on a basic idea: traversing from top to bottom and collecting the top point along each column. I want to know whether there is an easier solution to this problem, like directly extracting the boundary pixels and finding the top point. I am using MATLAB for this problem.
%download the image
img = logical(imread('http://i.stack.imgur.com/or2iX.png'));
%for some reason it appeared RGB with big solid borders.
%to monochrome
img = img(:,:,1);
%remove borders
img = img(~all(img,2), ~all(img,1));
%split into columns
cimg = num2cell(img,1);
%find first nonzero element per column
ridx = cellfun(@(x) find(x,1,'first'), cimg);
figure, imshow(img)
hold on
%image dim1 is Y, dim2 is X
plot(1:size(img,2),ridx-1,'r','linewidth',2)
%find top point
[yval, xval] = min(ridx);
If you want a smoother curve, try polyfit/polyval
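For illustration, the same smoothing idea in NumPy terms, with dummy data standing in for ridx (in MATLAB, polyfit and polyval work the same way):
import numpy as np
# x: column indices, y: top-row index per column (a dummy jagged boundary here)
x = np.arange(1, 101)
y = 0.02 * (x - 50) ** 2 + 5 + np.random.randn(x.size)
coeffs = np.polyfit(x, y, deg=4)    # fit a low-order polynomial to the boundary
y_smooth = np.polyval(coeffs, x)    # evaluate it to get a smooth curve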
EDIT:
If we want the line to break at gaps between connected components, we should change the code to something like
bord_idx = sub2ind(size(img), ridx, 1:size(img,2));
regs=regionprops(bwlabel(img),'pixelidxlist');
regs_idx = struct2cell(regs);
split_step = cellfun(@(x) sum(ismember(bord_idx,x)), regs_idx);
split_step = split_step(split_step>0);
split_yvals = mat2cell(ridx',split_step);
split_xvals = mat2cell([1:size(img,2)]',split_step);
figure, imshow(img)
hold on
for k = 1:length(split_step),
plot(split_xvals{k}, split_yvals{k}, 'r', 'linewidth', 2),
end
However, the result is not ideal if one region is positioned over the other. If the "shadowed" points are needed, you should try bwtraceboundary or convhull and find where the border turns downward.
As far as "simplest matlab solution" by which I think you mean built in matlab functions: imclose()->edge()->bwboundaries()->findpeaks()'on each boundary'->'filter results based on width and magnitude of peaks'. *you will need to tune all the parameters in these functions, I am just listing what would get you there if appropriately applied.
As far as processing speed is concerned, I think I would have done exactly what you did: collect the top edge with a top-down column search and then look for the point of highest inflection. As soon as you start doing processing of any kind, you perform several operations per pixel, which quickly becomes more expensive than your initial search (provided your image and target are simple enough).
That being said, here are some ideas that may help:
1: If you run a sufficiently heavy closing (dilate -> erode), that should fill in all the garbage at the bottom.
2: If you know that your point of interest is not at the left or right edge of the picture, you could take the right and left edge points and calculate a slope to apply as an offset to flatten the whole image.
3: If your image always has the large dark linear region below the peak, as seen here, you could locate those edges with houghlines looking for verticals and then search only the columns between them.
4: If speed is a concern, you could use a more sophisticated search pattern than left-to-right, as your peak has a fairly good distribution around it, which could help with faster localization of the maximum.

matlab find peak images

I have a binary image below:
It's an image of a random abstract picture, and using MATLAB I want to detect how many peaks it has, so that I know there are roughly 5 objects in it.
As you can see, there are 5 peaks in it, which means there are 5 objects in it.
I've tried using imregionalmax(), but I don't find it useful, since my image is already binary. I also tried regionprops('Area'), but it reports the wrong number since there is no exact whitespace between the objects. Thanks in advance.
An easy way to do this would be to simply sum down the rows of each column and find the peaks of the result using findpeaks. In the example below, I have opted to use the inverse of the image, which results in positive peaks where the objects are.
rowSum = sum(1 - image, 1);
If we plot this, it looks like the bottom plot
We can then use findpeaks to identify the peaks in this plot. We will apply a 5-point moving average to it to help eliminate false peaks.
[peaks, locations, widths, prominences] = findpeaks(smooth(rowSum));
You can then select the "true" peaks by thresholding based on any of these outputs. For this example we can use prominences and find the more prominent peaks.
isPeak = prominences > 50;
nPeaks = sum(isPeak)
5
Then we can plot the peak locations to confirm:
plot(locations(isPeak), peaks(isPeak), 'r*');
If you have some prior knowledge about the expected widths of the peaks, you could adjust the smoothing span to match this expected width and obtain cleaner peaks from findpeaks.
Using an expected width of 40 for your image, findpeaks was able to detect all 5 peaks with no false positives.
findpeaks(smooth(rowSum, 40));
As they are peaks, they are vertical structures. So in this particular case, you can use projection histograms (also known as the histogram projection function): you make all the black pixels fall as if they were affected by gravity. You will then find a curve of black pixels at the bottom of your image, and you can count the number of peaks.
Here is the algorithm:
Invert the image (black is normally the absence of information)
Histogram projection
Closing and opening in order to clean the signal and get the final result.
You can add a maxima detection to get the top of the peaks.
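For illustration, here is a rough NumPy/SciPy sketch of that projection idea (the image here is a dummy binary image with dark vertical blobs, and the cleanup sizes and prominence threshold are assumptions you would tune):
import numpy as np
from scipy import ndimage
from scipy.signal import find_peaks

# dummy binary image: white background (True) with a few dark vertical blobs (False)
img = np.ones((200, 600), dtype=bool)
for c in (80, 200, 320, 440, 560):
    img[60:180, c - 20:c + 20] = False

projection = np.sum(~img, axis=0)                  # black pixels "fall" onto the x axis
projection = ndimage.grey_closing(projection, 9)   # closing to clean the signal
projection = ndimage.grey_opening(projection, 9)   # opening to remove narrow spikes
peaks, _ = find_peaks(projection, prominence=20)   # maxima detection on top of the peaks
print(len(peaks))                                  # number of objects found (5 here)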

How to average multiple images using Octave and matrix manipulation to reduce noise?

UPDATE
Here is my code, which is meant to add up the two matrices using element-by-element addition and then divide by two.
function [ finish ] = stackAndMeanImage (initFrame, finalFrame)
cd 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
pkg load image;
i = initFrame;
f = finalFrame;
astr = num2str(i);
tmp = imread(astr, 'jpg');
d = f - i
for a = 1:d
a
astr = num2str(i + 1);
read_tmp = imread(astr, 'jpg');
read_tmp = rgb2gray(read_tmp);
tmp = tmp :+ read_tmp;
tmp = tmp / 2;
end
imwrite(tmp, 'meanimage.JPG');
finish = 'done';
end
Here are two example input images
http://imgur.com/5DR1ccS,AWBEI0d#1
And here is one output image
http://imgur.com/aX6b0kj
I am really confused as to what is happening. I have not implemented what the other answers have said yet though.
OLD
I am working on an image processing project where I am manually choosing images that are 'empty', i.e. contain only the background, so that my algorithm can compute the differences and then do some more analysis. I have a simple piece of code that computes the mean of two images, which I have converted to grayscale matrices, but this only works for two images. When I take the mean of two, then average that mean with the next image, and repeat, I end up with a washed-out white image that is absolutely useless. You can't even see anything.
I found that there is a function in MATLAB called imfuse that is able to combine images. I was wondering if anyone knew the process that imfuse uses to combine images; I am happy to implement it in Octave. Or does anyone know of, or has anyone already written, a piece of code that achieves something similar? Again, I am not asking anyone to write code for me, just wondering what the process is and whether there are pre-existing functions out there, which I have not found after my research.
Thanks,
AeroVTP
You should not end up with a washed-out image. Instead, you should end up with an image that is, technically speaking, temporally low-pass filtered. What this means is that half of the information content comes from the last image, one quarter from the second-to-last image, one eighth from the third-to-last image, and so on.
Actually, the effect on a moving image is similar to a display with a slow response time.
If you are ending up with a white image, you are doing something wrong. nkjt's guess about data type issues is a good one. Another possibility is that you have forgotten to divide by two after summing the two images.
One more thing... If you are doing linear operations (such as averaging) on images, your image intensity scale should be linear. If you just use the RGB values, or grayscale values simply calculated from them, you may get bitten by the nonlinearity of the image data; this nonlinearity comes from gamma correction. (Admittedly, most image processing programs just ignore the problem, as it is not always a big issue.)
As your project calculates differences of images, you should take this into account. I suggest using linearised floating-point values. Unfortunately, the linearisation depends on the source of your image data.
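For illustration, here is a small NumPy sketch of the standard sRGB linearisation (this assumes your images follow the sRGB transfer curve, which may or may not match your camera):
import numpy as np

def srgb_to_linear(img_uint8):
    # map 8-bit sRGB values to linear-light floats in [0, 1]
    c = img_uint8.astype(float) / 255.0
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)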
On the other hand, averaging is often the most efficient way of reducing noise. So you are on the right track, assuming the images are similar enough.
However, after having a look at your images, it seems that you may actually want to do something else than to average the image. If I understand your intention correctly, you would like to get rid of the cars in your road cam to give you just the carless background which you could then subtract from the image to get the cars.
If that is what you want to do, you should consider using a median filter instead of averaging. What this means is that you take for example 11 consecutive frames. Then for each pixel you have 11 different values. Now you order (sort) these values and take the middle (6th) one as the background pixel value.
If your road is empty most of the time (at least 6 frames of 11), then the 6th sample will represent the road regardless of the colour of the cars passing your camera.
If you have an empty road, the result from the median filtering is close to averaging. (Averaging is better with Gaussian white noise, but the difference is not very big.) But your averaging will be affected by white or black cars, whereas median filtering is not.
The problem with median filtering is that it is computationally intensive. I am very sorry, but I speak only broken and ancient Octave, so I cannot give you any useful code. In MATLAB or PyLab you would stack, say, 11 images into an M x N x 11 array and then take a single median along the depth axis. (When I say intensive, I do not mean it couldn't be done in real time with your data. It can, but it is much more complicated than averaging.)
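In NumPy terms, that stacking idea looks roughly like this (a sketch with dummy frames standing in for your road images, so the loading step is assumed):
import numpy as np
# dummy stack of 11 noisy grayscale frames standing in for your road images
frames = [np.random.rand(240, 320) for _ in range(11)]
stack = np.stack(frames, axis=2)        # M x N x 11 array
background = np.median(stack, axis=2)   # per-pixel median along the depth axis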
If you have really a lot of traffic, the road is visible behind the cars less than half of the time. Then the median trick will fail. You will need to take more samples and then find the most typical value, because it is likely to be the road (unless all cars have similar colours). There it will help a lot to use the colour image, as cars look more different from each other in RGB or HSV than in grayscale.
Unfortunately, if you need to resort to this type of processing, the path is slightly slippery and rocky. Average is very easy and fast, median is easy (but not that fast), but then things tend to get rather complicated.
Another BTW came into my mind. If you want to have a rolling average, there is a very simple and effective way to calculate it with an arbitrary length (arbitrary number of frames to average):
# N is the number of images to average
# P[i] are the input frames
# S is a sum accumulator (sum of N frames)

# calculate the sum of the first N frames
S <- 0
I <- 0
while I < N
    S <- S + P[I]
    I <- I + 1

# save_img() saves an averaged image
while there are images to process
    save_img(S / N)
    S <- -P[I-N] + S + P[I]
    I <- I + 1
Of course, you'll probably want to use for-loops, and += and -= operators, but still the idea is there. For each frame you only need one subtraction, one addition, and one division by a constant (which can be modified into a multiplication or even a bitwise shift in some cases if you are in a hurry).
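For illustration, here is a rough Python version of that idea (the frames are dummy data; once the window is full, each new output costs only one subtraction, one addition, and one division):
import numpy as np

def rolling_average(frames, N):
    # yield the mean of the last N frames, updated one frame at a time
    S = np.zeros_like(frames[0], dtype=float)
    for i, frame in enumerate(frames):
        S += frame
        if i >= N:
            S -= frames[i - N]    # drop the frame that left the window
        if i >= N - 1:
            yield S / N           # window is full, emit the average

# dummy data standing in for the image sequence
frames = [np.random.rand(120, 160) for _ in range(30)]
for avg in rolling_average(frames, N=11):
    pass  # save or display avg here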
I may have misunderstood your problem, but I think what you're trying to do is the following. Basically, read all images into a matrix and then use mean(). This is provided that you are able to fit them all in memory.
function [finish] = stackAndMeanImage (ini_frame, final_frame)
pkg load image;
dir_path = 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
imgs = cell (1, 1, final_frame - ini_frame);
## read all images into a cell array
current_frame = ini_frame;
for n = 1:(final_frame - ini_frame)
fname = fullfile (dir_path, sprintf ("%i", current_frame));
imgs{n} = rgb2gray (imread (fname, "jpg"));
current_frame++;
endfor
## create 3D matrix out of all frames and calculate mean across 3rd dimension
imgs = cell2mat (imgs);
avg = mean (imgs, 3);
## mean returns double precision so we cast it back to uint8 after
## rescaling it to range [0 1]. This assumes that images were all
## originally uint8, but since they are jpgs, that's a safe assumption
avg = im2uint8 (avg ./255);
imwrite (avg, fullfile (dir_path, "meanimage.jpg"));
finish = "done";
endfunction

Detect black dots from color background

My short question
How to detect the black dots in the following images? (I paste only one test image to make the question look compact. More images can be found →here←).
My long question
As shown above, the background color is roughly blue, and the dot color is "black". If I pick one black pixel and measure its color in RGB, the value can be (0, 44, 65) or (14, 69, 89)... Therefore, we cannot simply set a range to tell whether a pixel is part of a black dot or of the background.
I tested 10 images with different colors, but I hope I can find a method to detect the black dots against more complicated backgrounds which may be made up of three or more colors, as long as human eyes can identify the black dots easily. Some extremely small or blurry dots can be omitted.
Previous work
Last month, I asked a similar question on Stack Overflow, but did not get a perfect solution, though there were some excellent answers. Find more details about my work there if you are interested.
Here are the methods I have tried:
Converting to grayscale or to the brightness of the image. The difficulty is that I cannot find an adaptive threshold for binarization. Obviously, turning a color image to grayscale or using the brightness (the V channel of HSV) loses much useful information. Otsu's algorithm, which calculates an adaptive threshold, does not work either.
Calculating an RGB histogram. In my last question, natan's method was to estimate the black color from the histogram. It is time-saving, but the adaptive threshold is again a problem.
Clustering. I have tried k-means clustering and found it quite effective for backgrounds that have only one color. The shortcoming (see my own answer) is that I need to set the number of cluster centers in advance, but I don't know what the background will look like. What's more, it is too slow! My application captures in real time on an iPhone, and it can currently process only 7~8 frames per second using k-means (20 FPS would be good, I think).
Summary
I think not only similar colors but also adjacent pixels should be "clustered" or "merged" in order to extract the black dots. Please point me toward a proper way to solve my problem. Any advice or algorithm will be appreciated. There is no free lunch, but I hope for a better trade-off between cost and accuracy.
I was able to get some pretty nice first pass results by converting to HSV color space with rgb2hsv, then using the Image Processing Toolbox functions imopen and imregionalmin on the value channel:
rgb = imread('6abIc.jpg');
hsv = rgb2hsv(rgb);
openimg = imopen(hsv(:, :, 3), strel('disk', 11));
mask = imregionalmin(openimg);
imshow(rgb);
hold on;
[r, c] = find(mask);
plot(c, r, 'r.');
And the resulting images (for the image in the question and one chosen from your link):
You can see a few false positives and missed dots, as well as some dots that are labeled with multiple points, but a few refinements (such as modifying the structuring element used in the opening step) could clean these up somewhat.
I was curious to test my old 2D peak finder code on these images, without any threshold or any color considerations; really crude, don't you think?
im0=imread('Snap10.jpg');
im=(abs(255-im0));
d=rgb2gray(im);
filter=fspecial('gaussian',16,3.5);
p=FastPeakFind(d,0,filter);
imagesc(im0); hold on
plot(p(1:2:end),p(2:2:end),'r.')
The code I'm using is a simple 2D local maxima finder. There are some false positives, but all in all this captures most of the points with no duplication. The filter I was using was a 2D Gaussian with a width and std similar to a typical blob (the best would have been a matched filter for your problem).
A more sophisticated version that does treat the colors (rgb2hsv?) could improve this further...
Here is an extraordinarily simplified version that can be extended to full RGB, and it also does not use the Image Processing Toolbox. Basically, you do a 2-D convolution with a filter image (which is an example of the dot you are looking for), and the points where the convolution returns the highest values are the best matches for the dots. You can then of course threshold that result. Here is a simple binary-image example of just that.
%creating a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;
% randomly chose the location to put those crosses
points = randperm(numel(im));
% keep only certain number of points
points = points(1:numPoints);
% get the row and columns (x,y)
[xVals,yVals] = ind2sub(size(im),points);
for ii = 1:numel(points)
x = xVals(ii);
y = yVals(ii);
try
% create the crosses, try statement is here to prevent index out of bounds
% not necessarily the best practice but whatever, it is only for demonstration
im(x,y) = 1;
im(x+1,y) = 1;
im(x-1,y) = 1;
im(x,y+1) = 1;
im(x,y-1) = 1;
catch err
end
end
% display the randomly generated image
imshow(im)
% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)
% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');
% get the number of white pixels in filter
filSum = sum(filter(:));
% look for all points in the convolution results that matched identically to the filter
matches = find(result == filSum);
%validate all points found
sort(matches(:)) == sort(points(:))
% get x and y coordinate matches
[xMatch,yMatch] = ind2sub(size(im),matches);
I would highly suggest looking at the conv2 documentation on MATLAB's website.

resample an image from pixels to millimeters

I have an image (logical values), like this
I need to resample this image from pixels to mm or cm; this is the code I use to do the resampling:
function [ Ires ] = imresample3( I, pixDim )
[r,c]=size(I);
x=1:1:c;
y=1:1:r;
[X,Y]=meshgrid(x,y);
rn=r*pixDim;
cn=c*pixDim;
xNew=1:pixDim:cn;
yNew=1:pixDim:rn;
[Xnew,Ynew]=meshgrid(xNew,yNew);
Id=double(I);
Ires=interp2(X,Y,Id,Xnew,Ynew);
end
What I get is a black image. I suspect that this code does something other than what I have in mind: it seems to take only the upper-left part of the image.
What I want, instead, is to have the same image on a mm/cm scale: I expect every white pixel to be mapped from its original position to the new position (in mm/cm); what happens is certainly not what I expect.
I'm not sure that interp2 is the right command to use.
I don't want to resize the image, I just want to go from the pixel world to the mm/cm world.
pixDim is of course the size of an image pixel, obtained by dividing the height of the ear in cm by the height of the ear in pixels (it is on average 0.019 cm).
Any ideas?
EDIT: I was quite sure that the code made no sense, but someone told me to do it that way... Anyway, if I have two edge-detected ears, I first need to scale both to their real dimensions and then perform some operations on them. What I mean by "real dimensions" is that if one has size 6.5x3.5 cm and the other has size 6x3.2 cm, I need to perform the operations on these dimensions.
I don't understand how I can move from the pixel dimension to the cm dimension BEFORE doing the operations.
I want to move from one world to the other because I want to get rid of the capturing distance (I suppose that if one picture of an ear is taken from near and another from far, they will have different sizes in pixels).
Am I correct? Is there a way to do it? I thought I could plot the ear with scaled axes, but then I suppose I cannot subtract one image from the other, right?
MATLAB does not attach units to data. To apply your factor of 0.019 cm/pixel you would have to scale by a factor of 0.019 to get a 1 cm grid, but this would cause any detail smaller than 1 cm to be lost.
Best practice is to display the data using multiple axes, one for cm and one for pixels. It's explained here: http://www.mathworks.de/de/help/matlab/creating_plots/using-multiple-x-and-y-axes.html
Any function processing the data should be independent of the scale, or should take the scale factor as an input argument; anything else is a sign of serious algorithmic issues.
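As a rough illustration of the underlying idea (keep the data in pixels and only attach the physical scale when displaying), here is a sketch in Python/matplotlib that assumes a pixel size of 0.019 cm; the linked multiple-axes approach plays the same role in MATLAB:
import numpy as np
import matplotlib.pyplot as plt

pix_dim_cm = 0.019                      # assumed pixel size in cm
img = np.zeros((300, 200), dtype=bool)  # dummy logical image standing in for the ear
img[50:250, 60:140] = True

# label the axes in physical units without resampling the pixel data
plt.imshow(img, cmap='gray',
           extent=[0, img.shape[1] * pix_dim_cm, img.shape[0] * pix_dim_cm, 0])
plt.xlabel('cm')
plt.ylabel('cm')
plt.show()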
