Surface Reconstruction given point cloud and surface normals - algorithm

I have a .xyz file that contains irregularly spaced points and gives the position and surface normal at each point (i.e. XYZIJK). Are there algorithms out there that can reconstruct the surface while factoring in the IJK normal vectors? Most algorithms I have found assume that surface normals aren't known.
This would ultimately be used to plot surface error data (deviation from the nominal surface) using Python 3.x, and I'm sure I will have many more follow-on questions once I find a good reconstruction algorithm.

The state of the art right now is Poisson Surface Reconstruction and its screened variant. Code for both is available, e.g. at http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version8.0/. It is also implemented in MeshLab if you want to take a quick look.
If you want to look at other methods, check out this STAR (state-of-the-art report). Page three has a table of a number of approaches and their inputs.
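If you want to try this directly from Python, here is a minimal sketch using the Open3D library (just one library whose screened-Poisson implementation accepts normals; the file name and column layout are assumptions based on your description):

import numpy as np
import open3d as o3d  # assumes the Open3D library is installed

# Assumed XYZIJK layout: columns 0-2 are positions, 3-5 are unit normals
data = np.loadtxt("surface.xyz")            # hypothetical file name
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(data[:, :3])
pcd.normals = o3d.utility.Vector3dVector(data[:, 3:6])

# Screened Poisson reconstruction; higher depth gives a finer mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("surface_mesh.ply", mesh)

The resulting mesh can then be compared against your nominal surface to plot the error data.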

Related

How to avoid hole filling in surface reconstruction?

I am using the Poisson surface reconstruction algorithm to reconstruct a triangulated mesh surface from points. However, Poisson always generates a watertight surface, which fills all holes by interpolation.
For small holes that result from missing data, this hole filling is desirable. But for big holes, I do not want hole filling and would rather the surface remain open.
The figure above shows my idea: the left image is a point set with normals, the right one is the reconstructed surface. I want the top of this surface to remain open rather than the current watertight result.
Can anyone advise how I can keep these big holes open in Poisson surface reconstruction? Or is there another algorithm that could solve this?
P.S.
Based on the accepted answer to this question, I understand surface reconstruction algorithms can be categorized as explicit or implicit. Poisson is an implicit one, and explicit ones can naturally handle the big-hole problem. But since the point data I have is mostly sparse and noisy, I would prefer an implicit method like Poisson.
Your screenshots suggest you are using MeshLab's implementation, which is based on an old version of the code. That implementation is not capable of trimming the surface.
The latest implementation, however, contains the SurfaceTrimmer that does exactly what you want. Take a look at the examples at the bottom of the page to see how to use it.
To use the SurfaceTrimmer program, first run the SSDRecon program with --density to reconstruct a mesh surface that carries per-vertex density values; setting a trim value then removes faces whose density falls below that threshold.
Below is a sample usage of those programs on the demo eagle data:
./SSDRecon --in eagle.points.ply --out eagle.screened.color.ply --depth 10 --density
./SurfaceTrimmer --in eagle.screened.color.ply --out eagle.screened.color.trimmed.ply --trim 7
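If you are working from Python with Open3D rather than these command-line tools, a similar effect can be had by thresholding the per-vertex densities that its Poisson routine returns (a rough sketch under that assumption, not an exact equivalent of --trim; the input file and quantile are placeholders):

import numpy as np
import open3d as o3d  # assumes Open3D is installed

pcd = o3d.io.read_point_cloud("input_with_normals.ply")  # hypothetical input with normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
densities = np.asarray(densities)
# Remove vertices whose support density is in the lowest 5% - these are typically the
# interpolated regions that fill the big holes (the quantile is a tunable guess)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("trimmed.ply", mesh)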

Finding a cross in an image

I have a set of binary images in which I need to find the cross (examples attached). I use findContours to extract borders from the binary image, but I can't work out how to determine whether a given shape (border) is a cross or not. Maybe OpenCV has some built-in methods that could help solve this problem. I thought about solving it with machine learning, but I think there is a simpler way to do this. Thanks!
Viola-Jones object detection could be a good start. Though the main usage of the algorithm (AFAIK) is face detection, it was actually designed for general object detection, such as your cross.
The algorithm is machine-learning based (so you will need a set of labelled "cross" images and a set of labelled "not cross" images), and you will need to identify the significant "features" (patterns) that will help the algorithm recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects().
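For reference, once a cascade has been trained on cross/not-cross samples (e.g. with opencv_traincascade), using it from the Python API is short; the cascade and image file names here are hypothetical:

import cv2

# "cross_cascade.xml" is a hypothetical cascade you would first have to train yourself
cascade = cv2.CascadeClassifier("cross_cascade.xml")
img = cv2.imread("scene.png")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Returns a list of (x, y, w, h) rectangles around detected crosses
crosses = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in crosses:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)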
From the original image, let's say you've extracted sets of polygons that could potentially be your cross. Assuming that all of the cross is visible, to the extent that every edge can be distinguished as having a length, you could try the following.
1) Reject all polygons that do not have exactly the 12 vertices required to form your cross.
2) Re-order the vertices so that the shortest edge comes first.
3) Create a best-fit perspective transformation that maps your vertices onto a cross of uniform size.
4) Examine the residuals generated by using this transformation to project your polygon onto the uniform cross, where the residual for any given point is the distance between the projected point and the corresponding uniform point.
5) If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily due to the simplicity of the geometric shape you're searching for. Your contours will also need to have noise removed for this to work, e.g. each line within the cross needs to be converted to a single simple line.
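A rough Python/OpenCV sketch of the steps above (the template coordinates and tolerance are illustrative guesses, and step 2 is replaced by simply trying every cyclic vertex ordering):

import cv2
import numpy as np

# Canonical "uniform cross": 12 vertices of a plus sign on a 3x3 grid (illustrative template)
TEMPLATE = np.array([(1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (2, 2),
                     (2, 3), (1, 3), (1, 2), (0, 2), (0, 1), (1, 1)], dtype=np.float32)

def looks_like_cross(contour, tol=0.3):
    # Simplify the noisy contour; the 0.02 factor is a tunable guess
    approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
    if len(approx) != 12:                            # step 1: exactly 12 vertices
        return False
    pts = approx.reshape(-1, 2).astype(np.float32)
    best = np.inf
    # step 2 (simplified): try every cyclic ordering instead of sorting by shortest edge
    for shift in range(12):
        cand = np.roll(pts, shift, axis=0)
        H, _ = cv2.findHomography(cand, TEMPLATE)    # step 3: best-fit perspective transform
        if H is None:
            continue
        proj = cv2.perspectiveTransform(cand.reshape(-1, 1, 2), H).reshape(-1, 2)
        residuals = np.linalg.norm(proj - TEMPLATE, axis=1)  # step 4: per-vertex residuals
        best = min(best, residuals.max())
    return best < tol                                # step 5: all residuals within tolerance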
Depending on your requirements, you could try some local feature detector like SIFT or SURF. Check OpenSURF which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is to use SVM + HOG. That's all.
You could erode each blob and analyze how its pixel count goes down. No matter the rotation or scaling of the crosses, the count should go down with roughly the same ratio, except when you're closing in on the remaining center. Also, when the blob has become small enough you should expect it to sit at the center of the original blob. You won't need any machine learning algorithm or training data to solve this.
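A quick sketch of that idea (the kernel size and iteration cap are guesses, and how you decide the ratios are "roughly constant" is up to you):

import cv2
import numpy as np

def erosion_profile(blob_mask, max_iters=20):
    # blob_mask: binary uint8 image containing a single blob (255 = foreground)
    kernel = np.ones((3, 3), np.uint8)
    counts = [int(np.count_nonzero(blob_mask))]
    mask = blob_mask.copy()
    for _ in range(max_iters):
        mask = cv2.erode(mask, kernel)
        n = int(np.count_nonzero(mask))
        if n == 0:
            break
        counts.append(n)
    # Ratios between successive erosions; for a cross these should stay roughly constant
    return [b / a for a, b in zip(counts, counts[1:])]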

3D to 2D matching

I have point clouds (each point has a colour) of objects and images that show these objects. I want to find interest points in 2D/3D and match those so I know which parts of my image (at least those that had interest points) are found in the point cloud.
So I would need to find interest points first, get their descriptors, and match them. If possible, this should work with current fast and memory-conserving algorithms like BRISK or ORB (no patented algorithms!) from OpenCV. But I don't know how to implement them for 3D; is this even possible? I found a paper (Hough Transform and 3D SURF for robust three dimensional classification) that describes a 3D extension of SURF, which would be a start, but I can't find any info about that 3D extension. Even then, the question would be how feasible such an extension is for BRISK or other current algorithms.
So please, give me advice on how to proceed.
It's called epipolar geometry and stereo matching.
1) You would need the two (2D) images from which you generated the 3D point cloud.
2) From those two images, you can compute the fundamental matrix and then generate epipolar points. It is quite easy to do in MATLAB; I'm not sure about OpenCV.
3) Those epipolar points from the two separate images define lines (rays) into the 3D world.
I suggest you read about epipolar geometry and stereo matching for going from 2D to 3D.
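If you do end up in OpenCV, the fundamental-matrix part at least is available out of the box; a sketch, assuming kp1/kp2 are already-matched pixel coordinates (e.g. from ORB plus a brute-force matcher):

import cv2
import numpy as np

def fundamental_and_epilines(kp1, kp2):
    # kp1, kp2: Nx2 float32 arrays of matched pixel coordinates from the two images
    F, inlier_mask = cv2.findFundamentalMat(kp1, kp2, cv2.FM_RANSAC)
    # Epipolar lines in image 2 corresponding to the points in image 1 (a, b, c with ax+by+c=0)
    lines2 = cv2.computeCorrespondEpilines(kp1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    return F, inlier_mask, lines2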

Image Warp Filter - Algorithm and Rasterization

I'd like to implement a filter that allows resampling of an image by moving a number of control points that mark edges and tangent directions. The goal is to be able to freely transform an image as in Photoshop when you use "Free Transform" and choose the warp mode "Custom". The image is fitted into some kind of spline patch (if that is a valid name) that can be manipulated.
I understand how simple splines (paths) work but how do you connect them to form a patch?
And how can you sample such a patch to render the morphed image? For each pixel in the target I'd need to know what pixel in the source image corresponds. I don't even know where to start searching...
Any helpful info (keywords, links, papers, reference implementations) is greatly appreciated!
This document will get you a good insight into warping: http://www.gson.org/thesis/warping-thesis.pdf
However, this will include filtering out high frequencies, which will make the implementation a lot more complicated but will give a better result.
An easy way to accomplish what you want to do would be to loop through every pixel in your final image, plug the coordinates into your splines and retrieve the pixel in your original image. This pixel might have coordinates 0.4/1.2 so you could bilinearly interpolate between 0/1, 1/1, 0/2 and 1/2.
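A naive (slow, but hopefully clear) sketch of that backward-mapping loop; inverse_map stands in for whatever your spline patch evaluates to and is purely hypothetical:

import numpy as np

def bilinear_sample(img, x, y):
    # Sample a grayscale image at fractional coordinates (x, y)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bottom = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bottom

def warp(src, inverse_map, out_h, out_w):
    # inverse_map(u, v) -> (x, y): source coordinates for each output pixel,
    # e.g. evaluated from your spline patch (hypothetical callable)
    out = np.zeros((out_h, out_w), dtype=src.dtype)
    for v in range(out_h):
        for u in range(out_w):
            x, y = inverse_map(u, v)
            if 0 <= x < src.shape[1] - 1 and 0 <= y < src.shape[0] - 1:
                out[v, u] = bilinear_sample(src, x, y)
    return out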
As for splines: there are many resources and solutions online for the 1D case. As for 2D it gets a bit trickier to find helpful resources.
A simple example for the 1D case: http://www-users.cselabs.umn.edu/classes/Spring-2009/csci2031/quad_spline.pdf
Here's a great guide for the 2D case: http://en.wikipedia.org/wiki/Bicubic_interpolation
Based on this, you could derive your own spline scheme for the 2D case: define a bivariate polynomial (in x and y) and set up your constraints to solve for the coefficients of the polynomial.
Just keep in mind that the borders of the spline patches have to be consistent (both in value and derivative) to avoid ugly jumps.
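If you don't want to solve for the bicubic coefficients by hand, SciPy's RectBivariateSpline can interpolate a coarse grid of control-point displacements into a smooth 2D field; just a sketch under that assumption, with placeholder control values and output size:

import numpy as np
from scipy.interpolate import RectBivariateSpline  # assumes SciPy is available

OUT_W, OUT_H = 640, 480        # output image size (placeholder)

# 4x4 grid of control-point displacements (placeholder values); one spline per axis
grid_u = np.linspace(0.0, 1.0, 4)
grid_v = np.linspace(0.0, 1.0, 4)
dx = np.zeros((4, 4))
dx[1, 2] = 15.0                # e.g. push one interior control point to the right
dy = np.zeros((4, 4))
dy[2, 1] = -10.0               # ... and another one upward
spline_x = RectBivariateSpline(grid_v, grid_u, dx, kx=3, ky=3)
spline_y = RectBivariateSpline(grid_v, grid_u, dy, kx=3, ky=3)

def inverse_map(u, v):
    # Map an output pixel (u, v) back to source coordinates via the spline patch
    un, vn = u / (OUT_W - 1), v / (OUT_H - 1)
    return u + spline_x(vn, un)[0, 0], v + spline_y(vn, un)[0, 0]

This inverse_map can be plugged directly into the pixel loop sketched earlier.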
Good luck!

Dense pixelwise reverse projection

I saw a question on reverse projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a kind of more general version of the same problem:
Given either a focal length (which can be converted to arcseconds per pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used; it is directly related to focal length), compute the camera ray that goes through each pixel.
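For the ray-per-pixel step, a minimal sketch under the standard pinhole model (K is the 3x3 intrinsic matrix; the extrinsic pose would then rotate these camera-space rays into the world):

import numpy as np

def pixel_rays(K, width, height):
    # Returns a unit ray direction in camera coordinates for every pixel centre
    K_inv = np.linalg.inv(K)
    u, v = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)   # homogeneous pixel coordinates
    rays = pix @ K_inv.T                               # back-project through K^-1
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)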
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose for each frame (given a sufficiently large sample, of course). All of that is really just a massively parallel implementation of a generalized Hough algorithm; it's getting the candidate rays in the first place that I'm having trouble with.
A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.
That's a good suggestion, and I will definitely look into it (Photosynth kind of re-sparked my interest in this subject, but I've been working on it for months for RoboChamps). However, that's a sparse implementation: it looks for "good" features (points in the image that should be easily identifiable in other views of the same image), and while I certainly plan to score each match based on how good the matched feature is, I want the full dense algorithm to derive every pixel... or should I say voxel, lol?
After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)
