Point matching in images with a varying number of points (point clouds)

I'm working on sets of biological images (neuron calcium imaging). Each set is composed of a dozen images of the same area, taken at one-day intervals, in each of which we can define a number of sources (neurons, the items of interest).
Pre-processing of the images and source definition are working well; the goal now is to follow the evolution of each source (i.e. point) from image to image.
Different point matching techniques have been implemented and tried in MATLAB (Iterative Closest Point, Coherent Point Drift, TPS-Robust Point Matching), with TPS-RPM giving the best results.
The problem comes from the nature of the data: new points can appear in each new image and some will disappear, making it hard to implement proper point matching.
The previously named techniques work to some degree, but outliers are still an issue.
Matching points from image 1 to image 2 yields decent results, with some outliers being identified, but the cumulative effect of appearing/disappearing points over 10 images renders the technique very ineffective: few points from the first image are correctly matched to their counterparts in the last.
My question, then: has anyone faced similar issues? From what I've read, most point matching problems deal with feature detection and matching on homogeneous data, not so much on heterogeneous, locally varying images. Could anyone advise a specific approach or library, ideally in Python or MATLAB?
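For reference, here is roughly the kind of chained pairwise registration I have in mind, sketched in Python with the pycpd library (a CPD implementation). The alpha/beta values and the 5-pixel gating distance are placeholders, not tuned values:

```python
import numpy as np
from pycpd import DeformableRegistration

def match_sequence(point_sets):
    """Chain pairwise CPD registrations over a list of (n_i, 2) arrays."""
    correspondences = []
    for src, dst in zip(point_sets[:-1], point_sets[1:]):
        reg = DeformableRegistration(X=dst, Y=src, alpha=2.0, beta=2.0)
        warped_src, _ = reg.register()          # src moved onto dst
        # Nearest neighbour in dst for each warped source point; points
        # farther than the gating distance are treated as disappeared.
        d = np.linalg.norm(warped_src[:, None] - dst[None, :], axis=2)
        nearest = d.argmin(axis=1)
        nearest[d.min(axis=1) > 5.0] = -1       # -1 marks an unmatched point
        correspondences.append(nearest)
    return correspondences
```

The chaining step is exactly where the cumulative drift creeps in, which is why I'm looking for something better.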
Thank you for reading!

Related

Recognize recurring images within a larger one

Edit: this is not a duplicate of Determine if an image exists within a larger image, and if so, find it, using Python, since I do not know the pattern beforehand.
Suppose I have a big image (usually a picture taken with a camera, so it might be a bit noisy, but let's assume it's not for now) made up of multiple smaller images that are all equal to one another, something like this:
I need to find the contour of each one of those. The first step is recognizing that there's a recurring image (an unknown pattern) in the 2D image. How can I achieve this first step?
I read that I might use an FFT of the original image and search for duplicate frequencies. Would that be a feasible approach?
To elaborate on the problem: I do not know the image beforehand, nor its size, nor how many of them there will be in the big image. The images may be shot with a camera, so they might be noisy. The images won't overlap.
You can try using keypoint descriptors (SIFT/SURF/ORB/etc.) to find features in the image and then detect repeated occurrences of the same features elsewhere in that image.
You can see such a result in How to find euclidean distance between keypoints of a single image in opencv, where the same image is present 3 times and features are detected and linked between those subimages automatically.
In your image, the different occurrences of the same pattern are indeed automatically detected and linked in the same way.
The next step would be to group features into objects, so that the "whole" pattern can be extracted. Once you have a candidate pattern, you can compute a homography for each occurrence of the pattern (against one reference candidate) to verify that it really is a repetition. One open problem is how to find such candidates in the first place. Maybe it is worth looking for "parallel features", i.e. keypoint matches joined by parallel lines and/or lines of the same length. Or maybe there is some graph-theory approach.
All in all, this approach has some advantages and disadvantages:
Advantages:
real-world applicability - SIFT and other keypoints work quite well even with noise and some perspective effects, so the chances of finding such patterns are good.
Disadvantages:
slow
parametric (you have to define what it means for two features to be successfully matched)
not suitable for all kinds of patterns - your pattern must have some extractable keypoints
Those are some thoughts and probably not complete ;)
Unfortunately I have no full code for your concrete task yet, but I hope the idea is clear.
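As a starting point, here is a minimal sketch of the self-matching idea in Python/OpenCV, using ORB in place of SIFT/SURF; the feature count, the file name and the distance threshold are guesses that would need tuning:

```python
import cv2

img = cv2.imread('patterns.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create(nfeatures=2000)
kp, des = orb.detectAndCompute(img, None)

# Match the image against itself; the best hit of each descriptor is
# the keypoint itself, so look at the 2nd/3rd neighbours instead.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des, des, k=3)

repeats = []
for m_list in matches:
    for m in m_list[1:]:                 # skip the trivial self-match
        if m.distance < 40:              # hypothetical threshold
            repeats.append((kp[m.queryIdx].pt, kp[m.trainIdx].pt))
print(f"{len(repeats)} candidate repeated-feature pairs")
```

Grouping these pairs into whole-pattern candidates (and the homography check) would come on top of this.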
For such a clean image, it suffices to segment the patterns by blob analysis and to compare the segments, or the ROIs that contain them. Size is a first matching criterion; SAD, SSD or correlation similarity scores can do the finer comparison.
In practice you will face more difficulties, such as:
patterns that cannot be cleanly segmented
geometric variations in size/orientation
partial occlusion
...
Handling these is beyond the scope of this answer; they make things much harder than in the "toy" case.
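For the toy case, a minimal sketch of this blob-analysis route in Python/OpenCV might look as follows; the area filter, the 32x32 normalisation and the SSD threshold are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread('patterns.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
# Dark patterns on a white background: invert before thresholding.
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)

rois = []
for i in range(1, n):                       # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 50:                           # hypothetical size filter
        rois.append(cv2.resize(img[y:y+h, x:x+w], (32, 32)))

# Compare every pair of segments with SSD; small scores = same pattern.
for i in range(len(rois)):
    for j in range(i + 1, len(rois)):
        ssd = np.sum((rois[i].astype(float) - rois[j].astype(float)) ** 2)
        if ssd < 1e5:                       # hypothetical match threshold
            print(f"segments {i} and {j} look alike (SSD={ssd:.0f})")
```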
The goal is to find several equal or very similar patterns, not known beforehand, in a picture. As stated, this problem is still a bit ill-posed.
Are the patterns exactly equal, or only similar (noise added, maybe)?
Do you want the largest possible patterns, or are smaller subpatterns okay too, or are all possible patterns needed? (Each pattern could, of course, itself consist of equal subpatterns.)
Is the background always that simple (completely white), or can it be much more difficult? What do we know about it?
Are the patterns always equally oriented, equally scaled, and non-overlapping?
For the simple case of non-overlapping patterns on a simple background, Yves Daoust's segmentation answer performs well, but it fails if patterns are very close or overlapping.
For other cases, Micka's keypoint idea will help, but it might not perform well under noise and it might be slow.
I have one alternative: look at correlations of subblocks of the image.
In pseudocode:
Divide the image into overlapping areas of size MxN for suitable M, N (pixel width and height chosen to be approximately the size of the desired pattern).
Correlate each subblock with the whole image. Look for local maxima in the correlation; the positions of these maxima denote the positions of similar regions.
Choose a global threshold on all correlations (smartly, somehow) and find sets of equal patterns.
Determine the fine structure of these patterns by changing the shape from rectangular (bounding box) to a more sophisticated one (maybe by looking at the shape of the peaks in the correlation).
In case the approximate size of the desired patterns is not known beforehand, try large values of M, N and work down to smaller ones.
To speed up the whole process, start on a coarse scale (a downscaled version of the image) and then process finer scales only where needed. This requires balancing zooming in against performing correlations.
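In Python, steps 1-3 could be roughed out with OpenCV's normalized cross-correlation; M, N, the step size and the 0.9 threshold are all assumptions:

```python
import cv2
import numpy as np

img = cv2.imread('big.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
M, N = 64, 64                      # assumed approximate pattern size
step = 32                          # 50% overlap between subblocks

hits = []
for y in range(0, img.shape[0] - M, step):
    for x in range(0, img.shape[1] - N, step):
        block = img[y:y+M, x:x+N]
        corr = cv2.matchTemplate(img, block, cv2.TM_CCOEFF_NORMED)
        peaks = np.argwhere(corr > 0.9)          # global threshold (step 3)
        if len(peaks) > 1:                       # the block matches itself once
            hits.append(((y, x), peaks))
```

A real version would also suppress the cluster of near-1.0 scores around each block's own location before counting matches.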
Sorry, I cannot make this a full MATLAB project right now, but I hope this helps you.

Detecting hexagonal shapes in greyscale or binary image

For my bachelor thesis I need to analyse images taken in the ocean to count and measure the size of water particles.
My problem: besides the wanted water particles, the images show hexagonal patches all over the image, in:
- different sizes
- irregular shapes
- different greyscale values
(Example image below!)
It is clear that these patches will falsify my image analysis of the size and number of particles.
For this reason, these patches need to be detected and deleted somehow.
Since this will be just a small part of the work in my thesis, I don't want to spend much time on it, and I have already tried classic approaches (in ImageJ):
playing with the threshold (which results in also deleting wanted water particles)
analysing the image including the hexagonal patches and later sorting out the biggest areas (the hexagonal patches have roughly the biggest areas, but a lot of hexagons still remain)
playing with filters: applying a Gaussian filter to a duplicated image and subtracting the copy from the original deletes many patches (by reducing their greyscale values) but also deletes small wanted water particles, which again falsifies the result
A more complicated and time-consuming solution would be to use an existing library, for example in MATLAB or OpenCV, to detect points that describe the shapes, but so far I could not find any code that fits my task.
Has anyone written such code that I could use for my task, or does anyone have another idea?
You can also see a lot of hexagonal patches at different depths.
The little spots with a greater pixel value are the wanted particles!
Image processing is quite an involved area, so there are no hard and fast rules.
But if it were me, I would "mask" the image. This involves defining what you want to keep or remove as a pixel "mask". You then scan the mask over the image and compare the mask to the selected image portion, and you keep or remove the section (depending on your method) if it meets your criterion.
One example of a criterion would be the spatial and grey-scale error weighted against a likelihood function (e.g. chi-squared, mean squared error, etc.) or a normal distribution for which you define the uncertainty.
Some food for thought.
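To make that concrete, here is a bare-bones sketch in Python/OpenCV, assuming you can crop one representative hexagon patch to act as the mask; the file names and the 0.05 acceptance threshold are hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread('ocean.png', cv2.IMREAD_GRAYSCALE)       # placeholder path
mask = cv2.imread('hex_patch.png', cv2.IMREAD_GRAYSCALE)  # one cropped patch

# Squared-difference score at every position (low = patch-like).
err = cv2.matchTemplate(img, mask, cv2.TM_SQDIFF_NORMED)
ys, xs = np.where(err < 0.05)      # hypothetical acceptance criterion

clean = img.copy()
h, w = mask.shape
for y, x in zip(ys, xs):           # blank out regions that match the mask
    clean[y:y+h, x:x+w] = 0
```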
Maybe you can try the Hough transform:
https://en.wikipedia.org/wiki/Hough_transform
MATLAB has a built-in function, hough, which implements this, but it only works for lines. Maybe you can start from that and extend it to recognize hexagons.
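A different, non-Hough way to illustrate the goal is polygon approximation of contours in OpenCV: flag blobs whose outline simplifies to roughly six vertices. The epsilon factor and the area filter below are guesses:

```python
import cv2

img = cv2.imread('ocean.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
# Assumes patches are brighter than the background; invert if not.
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature

hexagons = []
for c in contours:
    peri = cv2.arcLength(c, True)
    poly = cv2.approxPolyDP(c, 0.02 * peri, True)
    # A rough hexagon: ~6 vertices and a reasonably large area.
    if len(poly) == 6 and cv2.contourArea(c) > 100:
        hexagons.append(poly)
print(f"{len(hexagons)} hexagon candidates")
```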

Algorithms for finding a look-alike face?

I'm doing a personal project trying to find a person's look-alike, given a database of photographs of other people all taken in a consistent manner: people looking directly into the camera, neutral expression, and no tilt to the head (think passport photo).
I have a system for placing markers at 2D coordinates on the faces, and I was wondering whether there are any known approaches for finding a look-alike of a face given this kind of data.
I found the following facial recognition algorithms:
http://www.face-rec.org/algorithms/
But none deal with the specific task of finding a look-alike.
Thanks for your time.
I believe you can also try searching for "Face Verification" rather than just "Face Recognition". This might give you more relevant results.
Strictly speaking, the two are actually different things in the scientific literature but are sometimes lumped together under face recognition. For details on their differences and some sample code, take a look here: http://www.idiap.ch/~marcel/labs/faceverif.php
However, for your purposes, what others such as Edvard and Ari have kindly suggested would work too. Basically, they are suggesting a K-nearest-neighbour-style face recognition classifier.
As a start, you can probably try that. First, compute a feature vector for each of the face images in your database; one possible feature to use is the Local Binary Pattern (LBP), for which code is easy to find. Do the same for your query image. Then loop through all the feature vectors, compare them to that of your query image using euclidean distance, and return the K nearest ones.
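A minimal sketch of that pipeline in Python, using scikit-image's LBP; the P/R parameters and K are just defaults to start from, and the random arrays stand in for real aligned grayscale face crops:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(gray, P=8, R=1):
    """Uniform LBP histogram as the face's feature vector."""
    lbp = local_binary_pattern(gray, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def k_nearest(query, database, k=5):
    """Indices of the k faces whose LBP histograms are closest (euclidean)."""
    q = lbp_hist(query)
    dists = [np.linalg.norm(q - lbp_hist(face)) for face in database]
    return np.argsort(dists)[:k]

# Example with random stand-ins for real face images:
db = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
print(k_nearest(np.random.randint(0, 256, (64, 64), dtype=np.uint8), db, k=3))
```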
While the above method is easy to code, it will generally not be as robust as the more sophisticated ones, because simple methods fail badly when faces are not aligned (the unconstrained-pose setting; search for "Labeled Faces in the Wild" to see the state of the art on this problem) or are taken under different environmental conditions. But if the faces in your database are aligned and taken under similar conditions, as you mentioned, then it might just work. If they are not aligned, you can use the facial key points, which you said you can compute, to align the faces. In general, comparing unaligned faces is a very difficult problem in computer vision and is still a very active area of research. But if you only consider faces that look alike in the same pose to be similar (i.e. similar in pose as well as looks), this shouldn't be a problem.
The website you gave has links to code for Eigenfaces and Fisherfaces. These are essentially two methods for computing feature vectors for your face images. Faces are then identified by a K-nearest-neighbour search for the faces in the database whose feature vectors (computed using PCA and LDA respectively) are closest to that of the query image.
I should probably also mention that with the Fisherfaces method you will need "labels" for the faces in your database. This is because Linear Discriminant Analysis (LDA), the classification method used in Fisherfaces, needs this information to compute a projection matrix that will project feature vectors of similar faces close together and dissimilar ones far apart. Comparison is then performed on these projected vectors. Here lies the difference between face recognition and face verification: for recognition, you need "labels" for the training images in your database, i.e. you need to identify them.
For verification, you are only trying to tell whether any two given faces are of the same person. Often you don't need "labelled" data in the traditional sense (although some methods might make use of auxiliary training data to help with face verification).
Code for computing Eigenfaces and Fisherfaces is available in OpenCV, in case you use it.
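For illustration, the OpenCV route looks roughly like this; it assumes the opencv-contrib build, and the random faces here are only stand-ins for real, equal-sized grayscale face images:

```python
import cv2
import numpy as np

# Dummy data: replace with your equal-sized grayscale face images.
faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8)
         for _ in range(4)]
labels = np.array([0, 0, 1, 1])

model = cv2.face.EigenFaceRecognizer_create()   # needs opencv-contrib-python
model.train(faces, labels)

label, confidence = model.predict(faces[2])     # nearest match in PCA space
print(label, confidence)
```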
As a side note:
A feature vector is just a vector in the linear algebra sense: n numbers packed together. The word "feature" refers to something like a "statistic", i.e. a feature vector is a vector of statistics that characterizes the object it represents. For example, for the task of face recognition, the simplest feature vector would be the intensity values of the grayscale image of the face. In that case, you just reshape the 2D array of numbers into an n-rows-by-1-column vector, each entry containing the value of one pixel. The pixel value here is the "feature", and the n x 1 vector of pixel values is the feature vector. In the LBP case, roughly speaking, the method computes a histogram over small patches of pixels in the image and joins these histograms together into one histogram, which is then used as the feature vector. So the Local Binary Pattern is the statistic, and the joined histograms form the feature vector. Together they describe the "texture" and facial patterns of your face.
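Concretely, the grayscale-intensity feature vector just described is one reshape away (sizes here are illustrative):

```python
import numpy as np

gray = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # a face image
feature_vector = gray.reshape(-1, 1)   # 4096 x 1 column of pixel "features"
print(feature_vector.shape)            # (4096, 1)
```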
Hope this helps.
These two would seem to be equivalent problems, though I do not work in the field. You essentially have the following two problems:
Face recognition: Take a face and try to match it to a person.
Find similar faces: Take a face and try to find similar faces.
Aren't these equivalent? In (1) you start with a picture that you want to match to its owner, and you compare it to a database of reference pictures for each person you know. In (2) you pick a picture from your reference database and run (1) for that picture against the other pictures in the database.
Since the algorithms seem to give you a measure of how likely two pictures belong to the same person, in (2) you just sort the measures in decreasing order and pick the top hits.
I assume you should first analyse all the pictures in your database with whatever approach you are using. You would then have a set of metrics for each picture, against which you can compare a specific picture and statistically find the closest match.
For example, if you can measure the distance between the eyes, you can find faces that have the same distance. You can then find the face that has the overall closest match and return that.
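In code, with the 2D markers reduced to a few such measurements per face, the "overall closest match" is just a nearest-neighbour lookup (the measurements below are made up):

```python
import numpy as np

def closest_face(query_measurements, database):
    """database: (n_faces, n_measurements) array, e.g. eye distance,
    nose width, ... Returns the index of the overall closest face."""
    diffs = np.asarray(database) - np.asarray(query_measurements)
    return int(np.argmin(np.linalg.norm(diffs, axis=1)))

db = [[62.0, 30.1], [58.3, 28.7], [61.5, 31.0]]   # made-up measurements
print(closest_face([61.0, 30.5], db))              # -> 2
```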

Warping an image to another image and calculating a similarity measure

I have a question about finding the best match between two images through intensity-based registration. I'd like some comments on my algorithm:
1. Compute the warp matrix for this iteration.
2. For every point of image A:
2a. Warp the pixel coordinates of image A into image B using the warp matrix.
2b. Interpolate to get the corresponding intensity from image B, if the warped coordinate lies inside image B.
2c. Accumulate the similarity measure between the image A pixel intensity and the interpolated image B intensity.
3. Cycle through every pixel in image A.
4. Cycle through every possible rotation and translation.
Would this be okay? Is there any relevant OpenCV code we can reference?
Comments on algorithm
Your algorithm appears good although you will have to be careful about:
Edge effects: you need to make sure that the algorithm does not favour matches where most of image A does not overlap image B; e.g. you may wish to compute the average similarity measure and constrain the transformation so that at least 50% of the pixels overlap.
Computational complexity: there may be a lot of possible translations and rotations to consider, and this algorithm may be too slow in practice.
Type of warp: depending on your application, you may also need to consider perspective and lighting changes as well as translation and rotation.
Acceleration
A similar algorithm is commonly used in video encoders, although most will ignore rotations/perspective changes and just search for translations.
One approach that is quite commonly used is to do a gradient search for the best match. In other words, try tweaking the translation/rotation in a few different ways (e.g. left/right/up/down by 16 pixels) and pick the best match as your new starting point. Then repeat this process several times.
Once you are unable to improve the match, reduce the size of your tweaks and try again.
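A minimal numpy sketch of that search, restricted to pure translations and using mean SSD as the similarity measure (note that np.roll wraps around the image edges, so a real version should crop to the overlap, per the edge-effects point above):

```python
import numpy as np

def mean_ssd(a, b):
    """Mean squared difference between two equal-sized images."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def refine_shift(img_a, img_b, step=16):
    """Greedy search for the translation that best aligns img_a onto img_b."""
    dy = dx = 0
    best = mean_ssd(img_a, img_b)
    while step >= 1:
        improved = False
        for ddy, ddx in [(-step, 0), (step, 0), (0, -step), (0, step)]:
            shifted = np.roll(np.roll(img_a, dy + ddy, axis=0),
                              dx + ddx, axis=1)
            score = mean_ssd(shifted, img_b)
            if score < best:
                best, dy, dx, improved = score, dy + ddy, dx + ddx, True
        if not improved:
            step //= 2          # no better neighbour: shrink the tweak size
    return dy, dx, best
```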
Alternative algorithms
Depending on your application you may want to consider some alternative methods:
Stereo matching. If your 2 images come from stereo camera then you only really need to search in one direction (and OpenCV provides useful methods to do this)
Known patterns. If you are able to place a known pattern (e.g. a chessboard) in both your images then it becomes a lot easier to register them (and OpenCV provides methods to find and register certain types of pattern)
Feature point matching. A common approach to image registration is to search for distinctive points (e.g. certain kinds of corners, or more general places of interest) and then try to find matching distinctive points in the two images. For example, OpenCV contains functions to detect SURF features. Google has published a great paper on using this kind of approach to remove rolling-shutter noise, which I recommend reading.

Find specific shapes in an image

I have the following problem: I'm working with gel electrophoresis images [A][B], which show DNA fragments (they appear as white bands). I want to extract and analyse them (on the right side is a standard of known size and concentration, from which the other three samples can be extrapolated). Each sample is loaded into a lane. One task is to find the lanes (in this case 4); the other is to extract the position in the picture at which a DNA band is present.
I have some problems finding the bands. I have already tried several things, e.g. pixel comparison, edge detection, corner detection, template matching and binary images, but all of them give insufficient results, especially if the pictures are bad (there might be a bad run, or some kind of smearing [C]) or if the bands are too close to each other.
Since I'm not an image expert, could someone drop some keywords for what is usually used in such cases? Actually, I'm not even sure whether the problem is one of image segmentation or of pattern recognition.
Any hints would be highly appreciated (also books for beginners).
Thanks in advance!
[A] http://en.wikipedia.org/wiki/Gel_electrophoresis
[B] (example gel image, not shown)
[C] (example of a smeared run, not shown)
In this case, profile extraction will probably do the trick: take a vertical slice of the image across a lane (assuming you have a rough idea of its position) and average the pixel values in every row of the slice. This gives you a 1D signal in which the bands appear as distinct peaks of varying heights.
You can detect the peak locations by looking for local maxima (not so robust here) or, better, by finding sufficiently long runs of increasing then decreasing signal values.
I would rather call this a segmentation problem.
Final hint: the lanes themselves can be located by analysing the profile obtained by averaging over the columns.
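A short Python sketch of the row-profile idea; the file name, the lane bounds x0/x1 and the prominence value are assumptions:

```python
import cv2
from scipy.signal import find_peaks

gel = cv2.imread('gel.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
x0, x1 = 100, 140                      # rough horizontal extent of one lane
profile = gel[:, x0:x1].mean(axis=1)   # average each row of the slice

# Bright bands appear as peaks; prominence suppresses noise-level maxima.
rows, _ = find_peaks(profile, prominence=5.0)
print("band positions (image rows):", rows)
```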
