I have a point cloud of an object, obtained with a laser scanner, and a CAD surface model of that object.
How can I match the point cloud to the surface, to obtain the translation and rotation between cloud and model?
I suppose I could sample the surface and try the Iterative Closest Point (ICP) algorithm to match the resulting sampled point cloud to the scanner point cloud.
Would that actually work?
And are there better algorithms for this task?
In recent OpenCV (the opencv_contrib repository), I have implemented a surface matching module to match a 3D model to a 3D scene. No initial pose is required and the detection process is fully automatic. The module also includes an ICP refinement stage.
To get an idea, check out this video (though it was not generated by the OpenCV implementation):
https://www.youtube.com/watch?v=uFnqLFznuZU
The full source code is here and the documentation is here.
You mentioned that you need to sample your CAD model. That is correct, and we have published a sampling algorithm suited to point pair feature matching, such as the one implemented in OpenCV:
Birdal, Tolga, and Slobodan Ilic. "A point sampling algorithm for 3D matching of irregular geometries." 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017.
http://campar.in.tum.de/pub/tbirdal2017iros/tbirdal2017iros.pdf
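If you build OpenCV with the opencv_contrib modules, the module can also be driven from Python. Here is a minimal sketch, loosely following the module's ppf_load_match sample (the file names are placeholders and the parameters would need tuning for your data):

    import cv2

    # Model and scene clouds; the module expects PLY files with normals.
    model = cv2.ppf_match_3d.loadPLYSimple("model.ply", 1)
    scene = cv2.ppf_match_3d.loadPLYSimple("scene.ply", 1)

    # Train the point pair feature detector on the model, then match to the scene.
    detector = cv2.ppf_match_3d_PPF3DDetector(0.025, 0.05)
    detector.trainModel(model)
    results = detector.match(scene, 1.0 / 40.0, 0.05)

    # Refine the best hypotheses with the module's ICP.
    icp = cv2.ppf_match_3d_ICP(100)
    _, refined = icp.registerModelToScene(model, scene, results[:2])
    for r in refined:
        print(r.pose)  # 4x4 rigid transform taking the model into the scene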
Yes, ICP can be applied to this problem, as you suggest, by sampling the surface. It works best if your laser scan covers all of the model's faces; otherwise you may have to remove invisible faces from your model (depending on how many of these there are).
One way of automatically preparing the model is to compute its concave hull and use it to discard hidden faces (for example, faces that are not close to the concave hull). Depending on how complex the model is, this may or may not be necessary.
ICP works well given a good initial guess, because it ignores points that are not close under the current guess. If ICP is not producing a good alignment, you can try it with multiple random restarts and keep the best alignment.
A more involved solution is local feature matching. You sample the surfaces and compute an invariant descriptor such as SHOT or FPFH, find the best matches, reject inconsistent ones, use the rest to compute an initial alignment, and then refine with ICP. You may not need this step, depending on how robust and fast random-restart ICP turns out to be. A sketch of the pipeline follows.
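As an illustration, here is a rough sketch of that pipeline using Open3D (a library not mentioned above; the voxel size and thresholds are guesses you would tune to your data):

    import open3d as o3d

    def global_then_icp(source, target, voxel=0.005):
        # Downsample, estimate normals, and compute FPFH descriptors on both clouds.
        src, tgt = source.voxel_down_sample(voxel), target.voxel_down_sample(voxel)
        for pc in (src, tgt):
            pc.estimate_normals(
                o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        param = o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100)
        src_f = o3d.pipelines.registration.compute_fpfh_feature(src, param)
        tgt_f = o3d.pipelines.registration.compute_fpfh_feature(tgt, param)

        # Coarse alignment from feature matches, with RANSAC rejecting inconsistent ones.
        coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
            src, tgt, src_f, tgt_f, True, voxel * 1.5,
            o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3)

        # Refine the coarse pose with point-to-plane ICP.
        fine = o3d.pipelines.registration.registration_icp(
            src, tgt, voxel * 1.5, coarse.transformation,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        return fine.transformation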
There's an open source library for point cloud algorithms which implements registration against other point clouds. Maybe you can try some of its methods to see if any fit.
As a starter, if it doesn't have anything specific for fitting against a polygon mesh, you can treat the mesh vertices as another point cloud and fit your point cloud against it. This is something it definitely supports.
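For instance, with Open3D (one such library; the same idea works elsewhere) the vertices-as-point-cloud approach is only a few lines, although plain ICP still benefits greatly from a rough initial pose (file names below are placeholders):

    import numpy as np
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("model.ply")   # CAD model (placeholder name)
    scan = o3d.io.read_point_cloud("scan.ply")      # laser scan (placeholder name)

    # Treat the mesh vertices as a point cloud and register the scan against it.
    target = o3d.geometry.PointCloud(mesh.vertices)
    result = o3d.pipelines.registration.registration_icp(
        scan, target, 0.01, np.eye(4))  # identity initial guess; a rough pose is better
    print(result.transformation)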
I hope that this is the correct place to ask this kind of question. I am developing a web app to design garden ponds, and I need to calculate the shape and size of the foil needed to cover a pond. The pond will be provided as a 3D model (three.js). The shape of the pond will be relatively simple (think one or more rectangular boxes, potentially with some stairs).
I am considering folding the surface of the 3D model out into a flat shape, but I do not know how to do that in a generic way. And even if I could do that, it would not be the complete solution (though it could be a starting point). I have been searching for a generic algorithm to do this, but so far I have not found anything. Does anyone know of an algorithm I could use for this, or at least something I could start with?
Some additional information:
this will be a browser-based solution which should show the pond; one option would be three.js since I am somewhat familiar with it
the foil that covers the pond needs to be watertight, so it needs to be one piece. That means that when you put it in the pond, it will form wrinkles, especially in the corners.
I have a set of binary images in which I need to find the cross (examples attached). I use findContours to extract borders from the binary image, but I can't work out how to determine whether a given shape (border) is a cross or not. Maybe OpenCV has some built-in methods which could help solve this problem. I thought about solving this with machine learning, but I think there is a simpler way. Thanks!
Viola-Jones object detection could be a good start. Though the main use of the algorithm (AFAIK) is face detection, it was actually designed for generic object detection, such as your cross.
The algorithm is machine-learning based (so you will need a set of images classified as "cross" and a set classified as "not cross"), and you will need to identify the significant "features" (patterns) that will help the algorithm recognize crosses.
The algorithm is implemented in OpenCV as cvHaarDetectObjects().
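In newer OpenCV versions, the equivalent is the CascadeClassifier class. A minimal sketch, where "cross_cascade.xml" is a hypothetical cascade you would first have to train on your cross / not-cross sets:

    import cv2

    cascade = cv2.CascadeClassifier("cross_cascade.xml")  # hypothetical trained cascade

    img = cv2.imread("scene.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # One (x, y, w, h) rectangle per detected cross.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)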
From the original image, let's say you've extracted sets of polygons that could potentially be your cross. Assuming that the whole cross is visible, to the extent that every edge can be distinguished as having a length, you could try the following.
1. Reject all polygons that do not have exactly the 12 vertices required to form your cross.
2. Re-order the vertices so that the shortest edge comes first.
3. Fit a best-fit perspective transformation that maps your vertices onto a cross of uniform size.
4. Examine the residuals generated by using this transformation to project your candidate onto the uniform cross, where the residual for any given point is the distance between the projected point and the corresponding uniform point.
5. If all the residuals are within your defined tolerance, you've found a cross.
Note that this works primarily because of the simplicity of the geometric shape you're searching for. Your contours will also need to have noise removed for this to work, e.g. each line within the cross needs to be reduced to a single simple line.
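A rough sketch of steps 1-5 with OpenCV (the canonical cross coordinates, the approximation epsilon and the tolerance are assumptions to adapt; handling reversed vertex winding is left out for brevity):

    import cv2
    import numpy as np

    # Canonical "plus" cross on a 3x3 grid, listed in order around the boundary.
    UNIFORM_CROSS = np.array([
        (1, 0), (2, 0), (2, 1), (3, 1), (3, 2), (2, 2),
        (2, 3), (1, 3), (1, 2), (0, 2), (0, 1), (1, 1)], dtype=np.float32)

    def is_cross(contour, tol=0.15):
        # Step 1: simplify the contour and reject anything without exactly 12 vertices.
        eps = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, eps, True).reshape(-1, 2).astype(np.float32)
        if len(approx) != 12:
            return False
        # Steps 2-4: try each cyclic vertex ordering, fit a perspective transform onto
        # the uniform cross, and measure the reprojection residuals.
        best = np.inf
        for shift in range(12):
            cand = np.roll(approx, shift, axis=0)
            H, _ = cv2.findHomography(cand, UNIFORM_CROSS)
            if H is None:
                continue
            proj = cv2.perspectiveTransform(cand.reshape(-1, 1, 2), H).reshape(-1, 2)
            best = min(best, float(np.linalg.norm(proj - UNIFORM_CROSS, axis=1).max()))
        # Step 5: accept if the worst residual is within tolerance (uniform-cross units).
        return best < tol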
Depending on your requirements, you could try a local feature detector like SIFT or SURF. Check out OpenSURF, which is an interesting implementation of the latter.
After some days of struggle, I came to the conclusion that the only robust way here is SVM + HOG. That's all.
You could erode each blob and analyze how its pixel count decreases. No matter the rotation or scaling of the crosses, the count should always decrease at the same ratio, except when the erosion closes in on the remaining center. Likewise, once the blob is small enough, you should expect it to sit at the center of the original blob. You won't need any machine learning algorithm or training data for this.
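A small sketch of that idea (the kernel size, iteration cap, and what counts as a "flat" ratio profile are all choices to tune on your images):

    import cv2
    import numpy as np

    def erosion_profile(blob_mask, max_iters=50):
        # Repeatedly erode the blob and record its area after each pass.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        mask = blob_mask.copy()
        counts = [cv2.countNonZero(mask)]
        for _ in range(max_iters):
            mask = cv2.erode(mask, kernel)
            counts.append(cv2.countNonZero(mask))
            if counts[-1] == 0:
                break
        # For a cross, the area should shrink at a roughly constant rate until only
        # the centre survives, so these successive ratios stay nearly flat.
        ratios = np.array(counts[1:]) / np.maximum(np.array(counts[:-1], float), 1)
        return counts, ratios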
Is it possible to make a 3D representation of an object by capturing many different angles using a webcam? If so, how is it possible and how is the image processing done?
My plan is to make a 3D representation of a person using a webcam; from the 3D representation, I will then be able to tell the person's vital statistics.
As Bart said (but did not post as an actual answer) this is entirely possible.
The research topic you are interested in is often called multi view stereo or something similar.
The basic idea revolves around using point correspondences between two (or more) images and then trying to find the best matching camera positions. When the positions are found, you can use stereo algorithms to back-project the image points into a 3D coordinate system and form a point cloud.
You can then process that point cloud further to obtain the measurements you are looking for.
If you are completely new to the subject you have some fascinating reading to look forward to!
Bart proposed "Multiple View Geometry" by Hartley and Zisserman, which is a very nice book indeed.
As Bart and Kigurai pointed out, this process has been studied under the title of "stereo" or "multi-view stereo" techniques. To be able to get a 3D model from a set of pictures, you need to do the following:
a) You need to know the "internal" parameters of the camera. These include the focal length, the principal point of the image, and the radial distortion of the lens.
b) You also need to know the position and orientation of each camera with respect to each other or a "world" co-ordinate system. This is called the "pose" of the camera.
There are algorithms to perform (a) and (b), described in Hartley and Zisserman's "Multiple View Geometry" book. Alternatively, you can use Noah Snavely's "Bundler" software, http://phototour.cs.washington.edu/bundler/, which does the same thing in a very robust manner.
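For step (a), a minimal intrinsic-calibration sketch with OpenCV's chessboard routines looks roughly like this (file names and board size are placeholders):

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the chessboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob("calib_*.png"):
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # K holds the focal length and principal point; dist the radial distortion terms.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)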
Once you have the camera parameters, you essentially know how a 3D point (X,Y,Z) in the world maps to an image co-ordinate (u,v) on the photo, and how to map an image co-ordinate back into the world. You can create a dense point cloud by searching, for each pixel in one photo, for a match in a photo taken from a different view-point. This requires a two-dimensional search. You can simplify the procedure by making the search one-dimensional, which is called "rectification": you take two photos and transform them so that their rows correspond to the same line in the world (a simplified statement). Now you only have to search along image rows.
An algorithm for this can also be found in Hartley and Zisserman.
Finally, you need to do the matching based on some measure. There is a lot of literature out there on "stereo matching". Another word used is "disparity estimation". This is basically searching for the match of pixel (u,v) on one photo to its match (u, v') on the other photo. Once you have the match, the difference between them can be used to map back to a 3D point.
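As an illustration, here is a minimal disparity-to-point-cloud sketch with OpenCV's semi-global block matcher (the image names are placeholders and assumed already rectified; Q is the reprojection matrix produced by cv2.stereoRectify):

    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Search along rows of the rectified pair for the best match per pixel.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

    Q = np.eye(4, dtype=np.float32)  # placeholder: use the Q from cv2.stereoRectify
    points_3d = cv2.reprojectImageTo3D(disparity, Q)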
You can use Yasutaka Furukawa's "CMVS" or "PMVS2" software to do this. Or, if you want to experiment yourself, OpenCV is an open-source computer vision toolbox covering many of the sub-tasks required for this.
This can be done with two webcams, in the same way your eyes work. It is called stereoscopic vision.
Have a look at this:
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
An affordable alternative to get 3D data would be the Kinect camera system.
Maybe not the answer you are hoping for, but Microsoft's Kinect does exactly that; there are some open source drivers out there that allow you to connect it to your Windows/Linux box.
I have a lot of polygons. Ideally, the polygons must not overlap one another, but they can be located adjacent to one another.
In practice, however, I have to allow for slight polygon overlap (defined by a certain tolerance), because all these polygons come from user hand-drawing input, which is not as machine-precise as I would like.
My question is: are there any software library components that:
Allow one to input a range of polygons
Check whether the polygons overlap by more than a prespecified tolerance
If so, stop; otherwise, continue
Create a mesh, in terms of coordinates and elements, for the polygons by grouping common vertices and edges together?
More importantly, link the mesh edges back to the original polygons' edges?
Or has anyone tackled this issue before?
This issue is the daily "bread" of GIS applications; it is exactly what is done there (we covered it in a GIS course). Look into how GIS systems address it. E.g. ArcGIS defines so-called topology rules and has functions to check whether edited features are topologically correct. See http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Topology_rules
This is pretty long, only because the question is so big. I've tried to group my comments based on your bullet points.
Components to draw polygons
My guess is that you'll have limited success without providing more information: a component to draw polygons will be tightly coupled to the language and UI paradigm you are using for the rest of your project, i.e. code for a web component will look very different from code for a native component.
Perhaps an alternative is to separate this element of the process out from the rest of what you're trying to do. There are some absolutely fantastic pre-existing editors that you can use to create 2d and 3d polygons.
Inkscape is an example of a vector graphics editor that makes it easy to enter 2d polygons, and has the advantage of producing output SVG, which is reasonably easy to parse.
In three dimensions Blender is an open source editor that can be used to produce arbitrary geometries that can be exported to a number of formats.
If you can use a Google Maps API (possibly in a native HTML rendering control) and you are interested in adding spatial points on a map overlay, you may be interested in the related click-to-draw polygon question on Stack Overflow. From past experience, other map APIs like OpenLayers support similar approaches.
Check whether polygons are overlapped
Thomas T made the point in his answer that there are families of related predicates that can be used to address this and related queries. If you are literally just looking for overlaps and other set-theoretic operations (union, intersection, set difference) in two dimensions, you can use the General Polygon Clipper.
You may also need to consider the slightly more general problem where two polygons don't overlap or share a vertex when they should. You can use a Minkowski sum to dilate (enlarge) two- and three-dimensional polygons to avoid such problems. The Computational Geometry Algorithms Library (CGAL) has robust implementations of these algorithms.
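As one concrete option, Shapely (a Python library not mentioned above; GPC and CGAL are alternatives) covers both the tolerance check and the dilation in a few lines:

    from shapely.geometry import Polygon

    def overlap_exceeds(coords_a, coords_b, tol_area):
        # Adjacent polygons may touch; flag only overlaps larger than the tolerance.
        a, b = Polygon(coords_a), Polygon(coords_b)
        return a.intersection(b).area > tol_area

    # Minkowski-style dilation: grow a polygon by a margin to catch near-misses
    # that should really count as adjacency.
    def dilated(coords, margin):
        return Polygon(coords).buffer(margin)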
I think it's more likely that you are really looking for a piece of software that can perform vertex welding. Christer Ericson's book Real-Time Collision Detection includes an extensive and very readable description of the basics of this field, and also of related issues such as edge snapping, crack detection and T-junctions. However, even though code snippets are included in that book, I know of no ready-made library that addresses these problems; in particular, no complete implementation is given for anything beyond basic vertex welding.
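For basic vertex welding itself, a minimal NumPy sketch (the grid size is an arbitrary tolerance) might look like:

    import numpy as np

    def weld_vertices(vertices, faces, grid=1e-3):
        # Snap (N, d) vertices to a grid, merge coincident ones, remap face indices.
        snapped = np.round(vertices / grid) * grid
        unique, inverse = np.unique(snapped, axis=0, return_inverse=True)
        return unique, inverse[faces]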
Obviously, the major 3D packages (Blender, Maya, 3ds Max, Rhino) all include built-in tools that address this problem.
Group polygons based on vertices
From past experience, this turned out to be one of the most time-consuming parts of developing software to solve problems in this area. It requires a reasonable understanding of graph theory and of algorithms to traverse boundaries. It is worth relying on a solid geometry or graph library to do the heavy lifting for you. In the past I've had success with igraph.
Link the updated polygons back to the originals.
Again, from past experience, this is just a case of careful bookkeeping, and some very careful design of your mesh classes up-front. I'd like to give more advice, but even after spending a big chunk of the last six months on this, I'm still struggling to find a "nice" way to do this.
Other Comments
If you're interacting with users, I would strongly recommend avoiding this issue where possible by using an editor that "snaps", rounding all user-entered points onto a grid. This will hopefully significantly reduce the amount of work you have to do.
Yes, you can use OGR. It has Python bindings. Specifically, the Geometry class has an Intersects method. I don't fully understand what you want in points 4 and 5.
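A minimal sketch of the overlap check with OGR's Python bindings (the WKT polygons are just examples):

    from osgeo import ogr

    a = ogr.CreateGeometryFromWkt("POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))")
    b = ogr.CreateGeometryFromWkt("POLYGON ((3 3, 6 3, 6 6, 3 6, 3 3))")

    if a.Intersects(b):
        # Area of the overlap; compare this against your tolerance.
        print(a.Intersection(b).GetArea())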
I'm using the OpenCV libraries for image processing in C++, and this is my question: do you think it is possible to do facial recognition (saying the name of a person based on a database of photos) by comparing the camera frame with the images in a database, using histogram comparison? (Note that I compare only the facial region of an image, using an example included in the OpenCV libraries.)
I'm asking because I've just tried to write a program like the above, but I have a lot of problems (I often detect the wrong person).
You might want to start with compiling the Face Detection using OpenCV example. As others have pointed out, general facial recognition isn't exactly an easy problem to solve. EigenFaces is one common technique for face recognition that is fairly easy to understand and implement.
As others have stated, it's a hard problem, but this gives you a place to start.
Some methods I have experience with are:
metric learning for comparing faces
naming video characters: they use SIFT descriptors computed at specific fiducial points on each face. Their code worked quite well for me in the past.
A dataset and benchmark dedicated to this task is Labeled Faces in the Wild. There you can find references to working methods for comparing faces after detection.
UPDATE:
I have a description of an experiment on face clustering: unsupervised face identification.
The experiment is described in Section 4.4 of my thesis.
The basic flow is as follows
Metric learning: how to determine if two faces are of the same person or not.
This part is supervised, in the sense that it requires as input face images labeled with the identity of the person who appears in each photo.
a. Detect fiducial points (eyes, corners of the mouth, nose).
You may use this code, or more recent versions such as this one.
b. Extract SIFT descriptors at the detected fiducial points.
c. Construct a "face descriptor": each face is described using a single vector.
This vector is a concatenation of the square roots of all the SIFT descriptors.
d. Use the method described here to learn a Mahalanobis distance between faces of different persons.
Unsupervised face identification: once the metric has been learned, you may use new photos of new people (these people need not be part of the training set; you may use photos of previously unseen people!).
a. Repeat stages a-c to construct the same "face descriptor" vector for each input face.
b. Compare the descriptor vectors using the learned Mahalanobis distance (a sketch of stages a-c and this comparison follows).
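A rough sketch of the descriptor construction (stages a-c) and the comparison (stage 2b) in Python, where the fiducial detector and the learned inverse covariance matrix VI are assumed to come from the steps above:

    import cv2
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    def face_descriptor(gray_face, fiducials, patch_size=16.0):
        # Stages a-c: SIFT descriptors at the given fiducial points (detector not
        # shown), square-rooted and concatenated into a single vector.
        sift = cv2.SIFT_create()
        kps = [cv2.KeyPoint(float(x), float(y), patch_size) for (x, y) in fiducials]
        _, desc = sift.compute(gray_face, kps)
        return np.sqrt(desc).ravel()

    def face_distance(d1, d2, VI):
        # Stage 2b: VI is the inverse covariance produced by the metric-learning
        # stage; a small distance suggests the same person.
        return mahalanobis(d1, d2, VI)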
I suggest using an existing algorithm such as the one available in the Luxand FaceSDK: http://www.luxand.com/facesdk/ rather than trying to develop your own.
There are three built-in techniques for face recognition in OpenCV now: PCA (Eigenfaces), LDA (Fisherfaces), and LBPH.
Nice example code:
https://github.com/Itseez/opencv/blob/master/samples/cpp/facerec_demo.cpp
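For completeness, a minimal LBPH sketch in Python (requires the contrib "face" module; images and labels are placeholders for your own aligned, equally sized grayscale face crops):

    import cv2
    import numpy as np

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(images, np.array(labels))  # list of crops, integer id per crop

    label, distance = recognizer.predict(test_face)  # lower distance = closer match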