Matching path in terrain - algorithm

I need to match a path recorded by lidar (x/y/height) onto a map tile (a pixel height field).
I can assume the problem is 2.5D (i.e. a unique height for each point, no caverns) and the region is small enough that the grid is uniform (I don't need to consider curvature). Naturally the track data is noisy, and I don't have any known locations in advance.
Rather than do a full 3D point-based Iterative Closest Point, are there any simple algorithms for purely surface-based path matching I should take a look at?
Specifically, it seems to be an image processing problem (x, y, height = intensity), so perhaps some sort of snake matching algorithm?

Instead of brute force (calculating the error for the full path from each starting point) you can expand only the best candidates and potentially save a lot of work:
1. Normalize:
   Subtract (pathMinX, pathMinY) from all points in the path.
   Subtract (gridMinX, gridMinY) from all points in the grid.
2. Find pathMaxX, pathMaxY, gridMaxX, gridMaxY.
3. deltaMaxX = gridMaxX - pathMaxX, deltaMaxY = gridMaxY - pathMaxY.
4. Create an array or list with (deltaMaxX + 1) * (deltaMaxY + 1) nodes, one for each combination of deltaX and deltaY. Each node has to hold the following information:
   index, initialized to 0
   deltaX and deltaY, initialized to the loop counters
   error, initialized to (path[0].height - grid[path[0].x + deltaX][path[0].y + deltaY].height)^2
5. Sort the array by error.
6. While arr[0].error <= arr[1].error:
   arr[0].index++
   if arr[0].index == n: return (arr[0].deltaX, arr[0].deltaY)
   arr[0].error += (path[arr[0].index].height - grid[path[arr[0].index].x + arr[0].deltaX][path[arr[0].index].y + arr[0].deltaY].height)^2
7. Remove node 0 and insert it at the correct position, so that the array becomes sorted again.
8. Repeat steps 6 and 7 until a solution is returned.
You can (and should, I think it's worth it) further improve the algorithm by first sorting the points in the path by extreme heights: lowest and highest points first, then moving towards the average height of the grid, e.g. sort by abs(height - averageGridHeight) descending. That way large errors are produced early and branches can be cut off much earlier.
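As a minimal sketch of this search in Python (not the original answer's code; it uses a heap instead of a re-sorted array, and assumes the path x/y values are already integer grid indices shifted so their minima are zero):

import heapq

def match_path(path, grid):
    # path: list of (x, y, height) tuples, x/y shifted so min x/y == 0
    # grid: 2D list/array of heights, indexed grid[x][y], shifted the same way
    # Returns the (deltaX, deltaY) offset with the smallest sum of squared height errors.
    n = len(path)
    grid_max_x, grid_max_y = len(grid) - 1, len(grid[0]) - 1
    path_max_x = max(p[0] for p in path)
    path_max_y = max(p[1] for p in path)

    # One heap entry per candidate offset: (accumulated error, points consumed, dx, dy)
    x0, y0, h0 = path[0]
    heap = [((h0 - grid[x0 + dx][y0 + dy]) ** 2, 1, dx, dy)
            for dx in range(grid_max_x - path_max_x + 1)
            for dy in range(grid_max_y - path_max_y + 1)]
    heapq.heapify(heap)

    while True:
        # Always extend the currently cheapest candidate by one more path point.
        err, idx, dx, dy = heapq.heappop(heap)
        if idx == n:
            return dx, dy  # this offset stayed cheapest over the whole path
        x, y, h = path[idx]
        heapq.heappush(heap, (err + (h - grid[x + dx][y + dy]) ** 2, idx + 1, dx, dy))

Sorting path by abs(height - averageGridHeight) in descending order before running this, as suggested above, makes bad offsets accumulate error sooner so they are expanded less often.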

Purely theoretical, but what if you encode the 3D data as grayscale images (the 3D path as a grayscale image; the height map is probably already something like that, you just need to ensure the scales make sense)?
If you have the 3D path as a grayscale image and the height map as a grayscale image, perhaps you could do a needle-in-a-haystack search using computer vision techniques. For example, in OpenCV there are a couple of techniques for finding a subimage in a larger image:
Template Matching - (overly simplified) uses a sliding window doing pixel comparison
Chamfer Matching - makes more use of edges, so probably more suitable for your goal. Bear in mind I ran into an allocation bug the last time I used it: it does work in the end, but needs a bit of love (doing malloc fixes and keeping track of the cost to get rid of false positives), and there are options to handle the scale difference.
Template matching examples:
Chamfer matching example:
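If you go the template-matching route, a minimal OpenCV (Python bindings) sketch might look like the following; heightmap.png and path_patch.png are assumed filenames, and both images must encode height on the same grayscale scale. Note that plain template matching handles translation only, not rotation or scale:

import cv2

# Load the height map and the rasterized path patch as single-channel images.
heightmap = cv2.imread("heightmap.png", cv2.IMREAD_GRAYSCALE)
patch = cv2.imread("path_patch.png", cv2.IMREAD_GRAYSCALE)

# Slide the patch over the height map and score every position.
scores = cv2.matchTemplate(heightmap, patch, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)
print("best match at (x, y):", best_loc, "score:", best_score)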

Related

Speeding up a pre-computed seam carving algorithm

I'm writing a JS seam carving library. It works great, I can rescale a 1024x1024 image very cleanly in real time as fast as I can drag it around. It looks great! But in order to get that performance I need to pre-compute a lot of data and it takes about 10 seconds. I'm trying to remove this bottleneck and am looking for ideas here.
Seam carving works by removing the lowest energy "squiggly" line of pixels from an image. e.g. If you have a 10x4 image a horizontal seam might look like this:
........x.
.x.....x.x
x.xx..x...
....xx....
So if you resize it to 10x3 you remove all the 'X' pixels. The general idea is that the seams go around the things that look visually important to you, so instead of just normal scaling where everything gets squished, you're mostly removing things that look like whitespace, and the important elements in a picture are unaffected.
The process of calculating energy levels, removing them, and re-calculating is rather expensive, so I pre-compute it in node.js and generate a .seam file.
Each seam in the .seam file is basically: starting position, direction, direction, direction, direction, .... So for the above example you'd have:
starting position: 2
seam direction: -1 1 0 1 0 -1 -1 -1 1
This is quite compact and allows me to generate .seam files in ~60-120kb for a 1024x1024 image depending on settings.
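For clarity, decoding one seam from that start-plus-directions form back to one row index per column is just a running sum; a small sketch (function name is illustrative):

def decode_seam(start_row, directions):
    # Expand "starting position, direction, direction, ..." into one row index per column.
    rows = [start_row]
    for d in directions:  # each d is -1, 0 or +1
        rows.append(rows[-1] + d)
    return rows

# The example seam from above:
print(decode_seam(2, [-1, 1, 0, 1, 0, -1, -1, -1, 1]))
# -> [2, 1, 2, 2, 3, 3, 2, 1, 0, 1]

These row indices are relative to the image as it was when the seam was computed, which is exactly the shifting problem described further down.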
Now, in order to get fast rendering I generate a 2D grid that represents the order in which pixels should be removed. So:
(figure A):
........1.
.1.....1.1
1.11..1...
....11....
contains 1 seam of info, then we can add a 2nd seam:
(figure B):
2...2....2
.222.2.22.
......2...
and when merged you get:
2...2...12
.122.2.1.1
1211..122.
....112...
For completeness we can add seams 3 & 4:
(figures C & D):
33.3..3...
..3.33.333
4444444444
and merge them all into:
(figure E):
2343243412
3122424141
1211331224
4434112333
You'll notice that the 2s aren't all connected in this merged version, because the merged version is based on the original pixel positions, whereas the seam is based on the pixel positions at the moment the seam is calculated which, for this 2nd seam, is a 10x3px image.
This allows the front-end renderer to basically just loop over all the pixels in an image and filter them against this grid by number of desired pixels to remove. It runs at 100fps on my computer, meaning that it's perfectly suitable for single resizes on most devices. yay!
Now the problem that I'm trying to solve:
The decoding step from seams that go -1 1 0 1 0 -1 -1 -1 1 to the pre-computed grid of which pixels to remove is slow. The basic reason for this is that whenever one seam is removed, all the seams from there forward get shifted.
The way I'm currently calculating the "shifting" is by splicing each pixel of a seam out of a 1,048,576 element array (for a 1024x1024 px image, where each index is x * height + y for horizontal seams) that stores the original pixel positions. It's veeerrrrrryyyyy slow running .splice a million times...
This seems like a weird leetcode problem, in that perhaps there's a data structure that would allow me to know "how many pixels above this one have already been excluded by a seam" so that I know the "normalized index". But... I can't figure it out, anything I can think of requires too many re-writes to make this any faster.
Or perhaps there might be a better way to encode the seam data, but using 1-2 bits per pixel of the seam is very efficient, and anything else I can come up with would make those files huge.
Thanks for taking the time to read this!
[edit and tl;dr] -- How do I efficiently merge figures A-D into figure E? Alternatively, any ideas that yield figure E efficiently, from any compressed format
If I understand correctly, your current algorithm is:
while there are pixels in Image:
seam = get_seam(Image)
save(seam)
Image = remove_seam_from_image(Image, seam)
You then want to construct an array containing, for each pixel, the number of the seam that removes it.
To do so, you could make a 1024x1024 array where each value is the index of that element of the array (y*width+x). Call this Indices.
A modified algorithm then gives you what you want:
Let Indices have the dimensions of Image and be initialized with the values [0, len(Image))
Let SeamNum have the dimensions of Image and be initialized to -1
seam_num = 0
while there are pixels in Image:
seam = get_seam(Image)
Image = remove_seam_from_image(Image, seam)
Indices = remove_seam_from_image_and_write_seam_num(Indices, seam, SeamNum, seam_num)
seam_num++
remove_seam_from_image_and_write_seam_num is conceptually identical to remove_seam_from_image, except that as it walks the seam to remove each pixel from Indices, it writes seam_num to the location in SeamNum indicated by that pixel's value in Indices.
The output is the SeamNum array you're looking for.
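A sketch of that bookkeeping in Python/NumPy for horizontal seams, assuming get_seam(img) returns one removed row index per column in the coordinates of the current (already shrunk) image; names are illustrative, not the answer's code:

import numpy as np

def build_seam_order(image, get_seam):
    # image: 2D array (e.g. grayscale / energy).
    # Returns the "figure E" grid: for every original pixel, the index of the
    # seam that removes it (0-based here, 1-based in the figures above).
    h, w = image.shape
    indices = np.tile(np.arange(h)[:, None], (1, w))   # original row of each remaining pixel
    seam_num = np.full((h, w), -1, dtype=np.int32)

    img = image.copy()
    for s in range(h):
        seam = get_seam(img)  # one row index per column, current coordinates
        # Record the seam number at the ORIGINAL position of each removed pixel.
        for c, r in enumerate(seam):
            seam_num[indices[r, c], c] = s
        # Remove the seam from both the image and the index bookkeeping, column by column.
        new_img = np.empty((img.shape[0] - 1, w), dtype=img.dtype)
        new_idx = np.empty((img.shape[0] - 1, w), dtype=indices.dtype)
        for c, r in enumerate(seam):
            new_img[:, c] = np.delete(img[:, c], r)
            new_idx[:, c] = np.delete(indices[:, c], r)
        img, indices = new_img, new_idx
    return seam_num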

MATLAB image processing technique

I have this 3D array in MATLAB (V: vertical, H: horizontal, t: time frame).
The figures below represent images obtained using the imagesc function after slicing the array along the t axis.
The area in black represents the damaged area and the other area is intact.
Each frame looks similar but has a different amplitude.
I am trying to visualize only the defect area and get rid of the intact area.
I tried to use a 'threshold' method to get rid of the intact area, as below:
NewSet = zeros(450,450,200);
for kk = 1:200
frame = uwpi(:,:,kk);
STD = std(frame(:));
Mean = mean(frame(:));
for ii = 1:450
for jj =1:450
if frame(ii, jj) > 2*STD+Mean
NewSet(ii, jj, kk) = frame(ii, jj);
else
NewSet(ii, jj, kk) = NaN;
end
end
end
end
However, since each frame has a different amplitude, the result is inconsistent from frame to frame.
Is there any image processing method to get rid of the intact area in this case?
Thanks in advance.
You're thresholding based on mean and standard deviation, basically assuming your data is normally distributed and looking for outliers. But your model should try to distinguish values around zero (noise) from higher values. Your data is not normally distributed, so mean and standard deviation are not meaningful.
Look up Otsu thresholding (the MATLAB Image Processing Toolbox has it). Its model does not perfectly match your data, but it might give reasonable results. Like most threshold estimation algorithms, it uses the image's histogram to determine the optimal threshold given some model.
Ideally you'd model the background peak in the histogram. You can find the mode, fit a Gaussian around it, then cut off at 2 sigma. Or you can use the "triangle method", which finds the point along the histogram that is furthest from the line between the upper end of the histogram and the top of the background peak. A little more complex to explain, but trivial to implement. We have this implemented in DIPimage (http://www.diplib.org); the M-file code is visible so you can see how it works (look for the function threshold).
Additionally, I'd suggest getting rid of the loops over x and y. You can write frame(frame < threshold) = NaN, and then copy the whole frame back into NewSet in one operation.
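A per-frame Otsu threshold as a sketch in Python/OpenCV (the function name and the rescaling to 8-bit are my assumptions; in MATLAB, graythresh gives the Otsu threshold):

import numpy as np
import cv2

def keep_defects(frames):
    # frames: float array of shape (rows, cols, n_frames), e.g. 450 x 450 x 200.
    # Keeps values above the per-frame Otsu threshold, NaN elsewhere.
    out = np.full(frames.shape, np.nan)
    for k in range(frames.shape[2]):
        frame = frames[:, :, k]
        # Otsu in OpenCV works on 8-bit images, so rescale each frame to 0..255 first.
        lo, hi = frame.min(), frame.max()
        scaled = ((frame - lo) / max(hi - lo, 1e-12) * 255).astype(np.uint8)
        t, _ = cv2.threshold(scaled, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        mask = scaled > t
        out[:, :, k][mask] = frame[mask]
    return out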
Do I understand the question correctly: the ROI is the dark border and everything it surrounds? If so, I'd recommend processing in 3D using some kind of region-growing technique like watershed or active contours (snakes) with markers from imregionalmin. These methods should provide a segmentation even if the border has small holes. Then just copy the segmented object to a new 3D array via logical indexing.

Field of view/ convexity map

Given a shape in a logical image, I am trying to extract the field of view from any point inside the shape, in MATLAB:
I tried something that tests each line going through the point, but it is really slow (I hope to do it for every point of the shape, or at least every point of its contour, which is quite a few runs).
I think a faster method would be to work iteratively by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on Matlab Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
for k=1:numel(angles)
[CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
idx=find(C);
intersec(k,:)=[CX(idx(1)), CY(idx(1))];
end
What this does is draw lines from the viewpoint in each direction specified in the array angles and look for the position of the first intersection with an obstacle or the edge of the image.
This should help visualizing the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is how it looks (obstacles in white, visible field in gray, viewpoint in red):
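For comparison, here is the same ray-casting idea as a brute-force Python/NumPy sketch (illustrative only; the roipoly polygon fill above avoids the gaps this version leaves between rays far from the viewpoint):

import numpy as np

def field_of_view(obstacles, viewpoint, n_rays=360, step=1.0):
    # obstacles: 2D boolean array, True where blocked; viewpoint: (row, col).
    # March each ray outwards until it hits an obstacle or leaves the image.
    h, w = obstacles.shape
    visible = np.zeros((h, w), dtype=bool)
    r0, c0 = viewpoint
    max_dist = int(np.hypot(h, w))
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(ang), np.cos(ang)
        for t in np.arange(0.0, max_dist, step):
            r, c = int(round(r0 + t * dr)), int(round(c0 + t * dc))
            if not (0 <= r < h and 0 <= c < w) or obstacles[r, c]:
                break  # ray left the image or was blocked
            visible[r, c] = True
    return visible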

distinguishing objects with opencv

I want to identify Lego bricks for building a Lego sorting machine (I use C++ with OpenCV).
That means I have to distinguish between objects which look very similar.
The bricks come to my camera individually on a flat conveyor. They might lie in any possible way: upside down, on their side, or "normal".
My approach is to teach the sorting machine the bricks by filming them with the camera in lots of different positions and rotations. The features of each and every view are calculated with the SURF algorithm.
void calculateFeatures(const cv::Mat& image,
std::vector<cv::KeyPoint>& keypoints,
cv::Mat& descriptors)
{
// detector == cv::SurfFeatureDetector(10)
detector->detect(image,keypoints);
// extractor == cv::SurfDescriptorExtractor()
extractor->compute(image,keypoints,descriptors);
}
If there is an unknown brick (the brick that I want to sort), its features are also calculated and matched with the known ones.
To find wrongly matched features I proceed as described in the book OpenCV 2 Cookbook:
With the matcher (= cv::BFMatcher(cv::NORM_L2)), the two nearest neighbours are searched in both directions:
matcher.knnMatch(descriptorsImage1, descriptorsImage2,
matches1,
2);
matcher.knnMatch(descriptorsImage2, descriptorsImage1,
matches2,
2);
I check the ratio between the distances of the two found nearest neighbours. If the two distances are very similar, it's likely that a false match is involved.
// loop for matches1 and matches2
for(iterator matchIterator over all matches)
if( ((*matchIterator)[0].distance / (*matchIterator)[1].distance) > 0.65 )
throw away
Finally, only symmetrical match pairs are accepted. These are matches in which not only is n1 the nearest neighbour to feature f1, but f1 is also the nearest neighbour to n1.
for(iterator matchIterator1 over all matches)
for(iterator matchIterator2 over all matches)
if ((*matchIterator1)[0].queryIdx == (*matchIterator2)[0].trainIdx &&
(*matchIterator2)[0].queryIdx == (*matchIterator1)[0].trainIdx)
// good Match
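For reference, the same two filters (ratio test, then symmetry test) as a runnable sketch in Python/OpenCV; des1 and des2 stand for the descriptor matrices of the two views and are my assumed names:

import cv2

def filtered_matches(des1, des2, ratio=0.65):
    # Ratio test in both directions, then keep only symmetric matches.
    bf = cv2.BFMatcher(cv2.NORM_L2)
    m12 = bf.knnMatch(des1, des2, k=2)
    m21 = bf.knnMatch(des2, des1, k=2)

    # Ratio test: drop ambiguous matches whose two nearest neighbours are too close.
    good12 = [p[0] for p in m12 if len(p) == 2 and p[0].distance / p[1].distance <= ratio]
    good21 = [p[0] for p in m21 if len(p) == 2 and p[0].distance / p[1].distance <= ratio]

    # Symmetry test: accept a match only if both directions agree on the pairing.
    back = {(m.trainIdx, m.queryIdx) for m in good21}
    return [m for m in good12 if (m.queryIdx, m.trainIdx) in back]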
Now only pretty good matches remain. To filter out some more bad matches I check which matches fit the projection of img1 on img2 using the fundamental matrix.
std::vector<uchar> inliers(points1.size(),0);
cv::findFundamentalMat(
cv::Mat(points1),cv::Mat(points2), // matching points
inliers,
CV_FM_RANSAC,
3,
0.99);
std::vector<cv::DMatch> goodMatches
// extract the surviving (inliers) matches
std::vector<uchar>::const_iterator itIn= inliers.begin();
std::vector<cv::DMatch>::const_iterator itM= allMatches.begin();
// for all matches
for ( ;itIn!= inliers.end(); ++itIn, ++itM)
if (*itIn)
// it is a valid match
The result is pretty good. But in cases of extreme similarity, faults still occur.
In the picture above you can see that a similar brick is recognized well.
However, in the second picture a wrong brick is recognized just as well.
Now the question is how I could improve the matching.
I had two different ideas:
The matches in the second picture do trace back to features that really fit, but only if the field of view is changed drastically. To recognize a brick I have to compare it in many different positions anyway (at least as shown in figure three), which means I know that I am only allowed to change the field of view minimally. The information about how strongly the field of view has changed should be hidden in the fundamental matrix. How can I read out of this matrix how far the position in space has changed? The rotation and strong scaling are of particular interest; if the brick happens to be filmed further to the left, that shouldn't matter.
Second idea:
I calculated the fundamental matrix from 2 pictures and filtered out features that don't fit the projection. Shouldn't there be a way to do the same using three or more pictures (keyword: trifocal tensor)? That way the matching should become more stable. But I neither know how to do this with OpenCV nor could I find any information on it on Google.
I don't have a complete answer, but I have a few suggestions.
On the image analysis side:
It looks like your camera setup is pretty constant, so it should be easy to separate the brick from the background. I also see your system finding features in the background; this is unnecessary. Set all non-brick pixels to black to remove them from the analysis.
When you have located just the brick, your first step should be to filter likely candidates based on size (i.e. the number of pixels in the brick). That way the example faulty match you show is already less likely.
You can take other features into account, such as the aspect ratio of the bounding box of the brick, and its major and minor axes (eigenvectors of the covariance matrix of the central moments).
These simpler features will give you a reasonable first filter to limit your search space.
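A sketch of such a pre-filter in Python/OpenCV (assumes a dark conveyor so the brick can be segmented with a global Otsu threshold, and OpenCV 4's findContours signature; names and thresholds are illustrative):

import cv2

def brick_shape_features(gray):
    # Segment the brick (assumed brighter than the background), take the largest blob,
    # and return cheap shape descriptors for pre-filtering candidates.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    brick = max(contours, key=cv2.contourArea)

    area = cv2.contourArea(brick)                      # size in pixels
    (cx, cy), (bw, bh), angle = cv2.minAreaRect(brick) # rotated bounding box
    aspect = max(bw, bh) / max(min(bw, bh), 1e-6)      # aspect ratio, rotation-invariant
    return area, aspect

Candidates whose area or aspect ratio is far from every stored view of the target brick can then be rejected before any SURF matching happens.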
On the mechanical side:
If bricks are actually coming down a conveyor, you should be able to "straighten" them along a straight edge using something like a rod that lies across the belt at an angle to the direction of the conveyor, so that the bricks arrive at your camera more uniformly oriented, like so.
Similarly to the previous point, you could use something like a very loose brush suspended across the belt to topple bricks that are standing up as they pass.
Again both these points will limit your search space.

Finding the angle of stripeline/ Angle of rotation

So I'm trying to find the rotation angle of the stripe lines in images like the attached photo.
The only assumption is that the lines are parallel and their orientation is approximately 90 degrees, more or less (say 5 degrees of tolerance).
I have to make sure the stripe lines in the result image are 100% vertical. The quality of the images varies, as do their histogram/greyscale values, so methods based on non-adaptive thresholding have already failed for my cases (I'm not interested in thresholding-based methods unless I can make them adaptive). Also, there are sometimes random black clusters on top of the stripe lines.
What I did so far:
1) Of course HoughLines is the first option, but I couldn't make it work for all my images. I had some partial success, though, following this great article:
http://felix.abecassis.me/2011/09/opencv-detect-skew-angle/.
The main reason for failure, to my understanding, was that I needed to fine-tune the parameters for different images: parameters for Canny/BW/morphological edge detection (if needed), minLineLength, maxLineGap, etc. For sure there's a way to hack into this and make it work, but to me this is a fragile solution!
2) What I'm working on right now is to divide the image into a top slice and a bottom slice, then find the peaks and valleys of each slice, and then basically find the angle using the width of the image and the translation between peaks. I'm currently working on finding which peak of the top slice belongs to which peak of the bottom slice, since there will be some false-positive peaks in my computation due to the black/white clusters on top of the stripe lines.
Example: Location of peaks for slices:
Top Slice = { 1, 33,67,90,110}
BottomSlice = { 3, 14, 35,63,90,104}
I am actually getting similar vectors when extracting peaks. As can be seen, the length of the vectors might vary; any idea how I can get a grouping like:
{{1,3},{33,35},{67,63},{90,90},{110,104}}
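For that grouping step, one simple option is greedy nearest-neighbour pairing with a tolerance; a sketch (the tolerance value and names are my assumptions):

def pair_peaks(top, bottom, tol=10):
    # Pair each top-slice peak with the closest unused bottom-slice peak within tol pixels;
    # peaks that find no partner are treated as false positives and dropped.
    pairs, used = [], set()
    for t in top:
        candidates = [b for b in bottom if b not in used and abs(b - t) <= tol]
        if candidates:
            best = min(candidates, key=lambda b: abs(b - t))
            pairs.append((t, best))
            used.add(best)
    return pairs

print(pair_peaks([1, 33, 67, 90, 110], [3, 14, 35, 63, 90, 104]))
# -> [(1, 3), (33, 35), (67, 63), (90, 90), (110, 104)]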
I’m open to any idea about improving any of these algorithms or a completely new approach. If needed, I can upload more images.
If you can get a list of points for a single line, a linear regression will give you a formula for the straight line that best fits the points. A simple trig operation will convert the line formula to an angle.
You can probably use some line thinning operation to turn the stripes into a list of points.
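A minimal sketch of that regression-to-angle step in Python/NumPy (fitting x as a function of y, since the stripes are near-vertical and a y-on-x fit would have a near-infinite slope):

import numpy as np

def stripe_angle_degrees(xs, ys):
    # xs, ys: pixel coordinates of one thinned stripe line.
    # Returns the angle from vertical in degrees (0 means perfectly vertical).
    slope, intercept = np.polyfit(ys, xs, 1)  # x = slope * y + intercept
    return np.degrees(np.arctan(slope))

Averaging this angle over several stripes and rotating the image by its negative then makes the stripes vertical.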
You can run an accumulator of spatial derivatives along different angles. If you want a half-degree precision and a sample of 5 lines, you have a maximum 10*5*1500 = 7.5m iterations. You can safely reduce the sampling rate along the line tenfold, which will give you a sample size of 150 points per sample, reducing the number of iterations to less than a million. Somewhere around that point the operation of straightening the image ought to become the bottleneck.
